What kind of speech is not protected by free speech? A clear legal guide

Free speech is a core American value, but courts have long recognized limits. This guide explains the main categories of speech U.S. courts treat as unprotected or less protected, why those categories exist, and how the rules are applied in practice.

The focus is legal tests from Supreme Court opinions and practical examples that help civic-minded readers understand when speech may fall outside First Amendment protection. Where relevant, the article points to the primary cases so readers can check the source material for themselves.

A small set of narrowly defined categories receive little or no First Amendment protection.
Supreme Court tests like Brandenburg and Miller guide lower courts and hinge on intent, context, and immediacy.
Private platform rules and state laws often operate separately from constitutional protections.

Short answer: what free speech court cases say about unprotected speech

One-sentence summary

U.S. law recognizes several narrow categories of speech that receive little or no First Amendment protection, including incitement, fighting words, obscenity, defamation, true threats, and some limits on commercial speech.

These categories are defined by legal tests developed by the Supreme Court and applied in many lower-court decisions; for example, the incitement standard comes from Brandenburg v. Ohio (1969) and is the central test courts use to decide when speech may be punished by the state.


Why this matters for ordinary people: knowing the main categories helps people understand when government action, criminal law, or civil remedies may apply rather than assuming every harsh or offensive statement is constitutionally protected.

The legal rules are narrowly drawn and highly fact specific, so small changes in context, audience, timing, or wording can change an outcome.

Definitions and legal context from key free speech court cases

What legal ‘categories’ mean

In First Amendment law, a legal category is a label courts use to group similar kinds of speech and then apply a test to decide whether government regulation is permissible; courts rely on tests to give consistent rules across different cases.

Those tests come from Supreme Court opinions that set standards for lower courts to follow. For example, the Supreme Court articulated the three-part obscenity test commonly called the Miller test in Miller v. California (1973); judges use it to determine when sexually explicit material falls outside constitutional protection.


How Supreme Court precedent shapes lower court outcomes

When the Supreme Court sets a test, lower courts apply that test to the facts before them and often refine how the test works in different contexts; the baseline from the high court is critical, but many outcomes depend on fact-specific findings about intent and effect.

For example, the Brandenburg incitement test is the standard lower courts use to decide when speech that seems to encourage illegal acts is punishable: courts must examine whether the speech was directed to producing imminent lawless action and was likely to produce it.

Major categories and the tests courts use in free speech court cases

Incitement and the Brandenburg standard

Incitement is unprotected when speech is aimed at producing imminent lawless action and is likely to produce that action; this two-part standard comes from Brandenburg v. Ohio and is strict about both intent and immediacy.

Plain example: a speaker who urges a crowd to commit violence immediately, with words and context making it likely the crowd will act, may lose First Amendment protection under the Brandenburg test.

Fighting words and Chaplinsky

The fighting-words category traces to Chaplinsky v. New Hampshire (1942) and covers face-to-face insults or provocations likely to provoke an immediate breach of the peace; courts treat this category narrowly and focus on the likelihood of an immediate violent response.

Concrete example: a deliberately shouted personal insult at someone in close proximity that foreseeably triggers a violent reaction could fall outside protection as fighting words.

Obscenity and the Miller test

Obscenity is governed by the Miller three-part test: material is obscene if it appeals to the prurient interest under contemporary community standards, depicts sexual conduct in a patently offensive way, and lacks serious literary, artistic, political, or scientific value. Only when all three prongs are met does the material lose First Amendment protection (Miller v. California).

Example: purely pornographic material that meets the Miller criteria in a given community may be regulated, while material with recognized artistic or political value typically remains protected.

Read the primary court opinions for context

Read the controlling Supreme Court opinions to see how tests are written and applied in full.


Defamation and New York Times v. Sullivan

Defamation law allows civil liability for false statements of fact that harm reputation, but when the plaintiff is a public official or public figure, the plaintiff must prove actual malice: that the statement was made with knowledge it was false or with reckless disregard for the truth. This rule was set out in New York Times Co. v. Sullivan (1964).

Practical note: public figures face a higher proof standard in defamation cases; private individuals typically need to show less demanding forms of fault to recover for reputational harm.

True threats and Virginia v. Black

Statements that qualify as true threats, where the speaker intends to intimidate and the statement would reasonably be understood as a serious expression of an intent to harm, are not protected; Virginia v. Black (2003) clarified that cross-burning and similar conduct carried out with intent to intimidate can be treated as a true threat outside First Amendment protection.

Example: a communicated threat against a specific person that is meant to cause fear and is credible may be criminally prosecutable as a true threat.

Commercial speech and the Central Hudson test

Commercial speech receives intermediate protection. The Central Hudson test first asks whether the speech concerns lawful activity and is not misleading; if so, regulation is permissible only when the asserted government interest is substantial, the regulation directly advances that interest, and it is no more extensive than necessary. False or misleading commercial claims can be regulated or treated as unprotected (Central Hudson Gas & Electric Corp. v. Public Service Commission, 1980).

Clear example: an advertisement that makes demonstrably false claims about a product may be subject to regulation or enforcement even if political speech nearby would be protected.

How courts decide: key decision criteria and how tests are applied

Intent, context, and likelihood

Courts regularly look at intent, context, and likelihood when applying tests like Brandenburg and Chaplinsky; the Brandenburg test explicitly requires that the speech be directed to producing imminent lawless action and likely to do so, making both intent and probability central factors.

That analysis often forces judges to weigh surrounding facts such as the speaker’s words, tone, the audience’s composition, and whether events were imminent or speculative.

Public versus private actors and the role of forum

The constitutional limits on speech apply to government action; private platforms and private employers can set their own rules even for speech that the First Amendment would protect against government regulation. For more on constitutional limits, see the site’s overview on constitutional rights.

Additionally, whether speech occurs in a traditional public forum, a limited public forum, or on private property affects how courts analyze restrictions and which standards apply.

How courts treat statements about public figures versus private persons

Defamation doctrine distinguishes public figures from private persons: public figures must show actual malice under New York Times Co. v. Sullivan, while private persons generally need to show only a lower level of fault to succeed in defamation claims.

In practice, that difference means that speech criticizing politicians or other public figures is harder to turn into a successful lawsuit unless there is clear evidence of knowingly false statements or reckless disregard for truth.

Common misunderstandings and typical legal pitfalls

Mixing up offensive speech with unprotected categories

Do not assume that hateful, insulting, or offensive speech is automatically unprotected; courts often protect offensive ideas unless the statement fits a defined unprotected category like a true threat or fighting words.

Calling speech offensive is different from meeting the legal tests that remove protection, which look for specific intent, immediacy, and likelihood of harm.

Assuming all online harassment is unprotected

Online harassment may sometimes be punishable, but courts examine whether the online statements meet established tests for incitement, threats, or defamation; mere nastiness online does not by itself prove a legal exception.

Courts and legislatures are still adapting longstanding tests to digital contexts, where timing, reach, and anonymity complicate traditional immediacy and intent questions.

Confusing private platform moderation with constitutional rules

Private platforms can remove or moderate content under their terms of service regardless of whether the First Amendment would allow government restriction; this distinction often surprises readers who see content removed and assume a constitutional violation.

In short, private moderation is a separate set of rules and remedies from constitutional law, and remedies against platforms usually arise from contract, platform policy, or statutory regimes rather than the First Amendment.

Practical scenarios: schools, workplaces, advertising, and threats

Student speech and school safety limits

Schools receive some authority to regulate student speech when it would materially disrupt school activities or threaten safety, and courts balance student expression against educational mission and safety concerns.

When speech on campus crosses into true threats or incitement to immediate lawless action, courts have allowed disciplinary measures consistent with the special context of schools.

Employee speech and employer policies

Private employers typically can discipline employees for workplace speech under company policies, while public employers must respect constitutional protections; context matters, including whether speech occurs during work hours or concerns workplace operations.

Takeaway: employees should review employer policies and understand that a public-sector employer’s regulation of employee speech must meet constitutional tests, while private employers operate under different rules.

Commercial speech examples and false advertising

In advertising, the Central Hudson framework lets regulators restrict false or misleading commercial claims and require disclosures when the government interest is substantial and the regulation is proportionate to that interest.

Concrete scenario: a business that advertises a medical product with false efficacy claims may face enforcement even if the marketer argues that the statements are expressive or promotional in nature.

When threats cross the line to criminal conduct

True threats require a showing that a reasonable listener would interpret the statement as a serious expression of an intent to harm and that the speaker intended to intimidate; Virginia v. Black discusses how conduct like cross-burning can be treated as an intimidation-based threat outside First Amendment protection.

Practical takeaway: if a communicated threat targets an individual and is credible, it can trigger criminal statutes rather than constitutional protection for the speech.

What to do if you think speech crosses the line: rights and remedies

When to document and report

If you believe speech may be unlawful, preserve records: take screenshots, note dates and witnesses, and save copies of messages or recordings; documentation matters for later review by authorities or counsel.

Documenting the context helps lawyers and investigators assess whether the speech fits a category like a threat, defamation, or incitement to immediate unlawful action.


When to consult a lawyer

Consult legal counsel when considering defamation claims, threats, or complex cross-jurisdictional online harms; for defamation involving public figures, the actual-malice standard of New York Times Co. v. Sullivan is difficult to meet, and legal advice is recommended before filing suit. If you need to speak with counsel, you can contact a lawyer through the site’s contact page.

Lawyers can advise on civil remedies such as defamation lawsuits, criminal reporting for threats, or administrative complaints for false commercial claims under the appropriate regulatory scheme.

How courts and regulators respond

Remedies vary by category: criminal prosecution is possible for true threats or incitement, civil suits are common for defamation, and regulatory enforcement often targets false commercial speech under consumer protection laws.

Timeliness matters: statutes of limitations and prompt reporting can affect available remedies and how evidence is preserved for legal processes.

Emerging questions in free speech court cases: online threats, harassment, and AI-generated content

How immediacy is being interpreted for online speech

Court decisions are still developing around how the Brandenburg requirement of imminence applies online; courts ask whether a statement posted digitally was directed and likely to produce immediate lawless action in the same way as an in-person call to violence.

Because online speech can spread quickly and reach large audiences, judges examine timing, directness, and the realistically foreseeable effect of the message when assessing imminence.

AI-generated content and attribution problems

AI-generated speech raises new issues about intent and attribution because it can be difficult to say who authored or directed a statement; that uncertainty complicates tests that focus on the speaker’s intent or state of mind.

Courts and policymakers are considering how existing doctrines apply when synthetic content is widely disseminated and where the human role in creation is unclear.

State laws and private platform rules interacting with constitutional doctrine

Even when the First Amendment would protect speech from government restriction, state laws and private platform policies can impose practical constraints; the interaction between those rules and constitutional doctrine remains an active area of litigation and policy work.

Readers should note that a statement may be legally protected yet still subject to platform removal or civil penalties under state consumer protection laws depending on the context.


Michael Carbonara Logo

Summary: what readers should remember from free speech court cases

Key takeaways

1) Unprotected categories are narrow: incitement, fighting words, obscenity, defamation, true threats, and some commercial speech limits.

2) The Supreme Court sets tests that lower courts apply; examples include Brandenburg for incitement, Miller for obscenity, and New York Times v. Sullivan for defamation.

3) Outcomes are highly fact specific; context, intent, audience, and immediacy matter in every case.

Where to find primary sources

Primary Supreme Court opinions are publicly available and useful for readers who want the exact tests and reasoning; the opinions discussed above provide the baseline for most later decisions.

Look up the named cases to read the full opinions and how the Court explained each test and standard in context. For the full text of the Brandenburg opinion see a court host at Justia.

Frequently asked questions

Is offensive or hateful speech automatically unprotected?

No. Offensive or hateful speech is often protected unless it meets a specific legal test such as a true threat, fighting words, or defamation.

Can private platforms remove speech the First Amendment would protect?

Yes. Private platforms set and enforce their own rules independently of constitutional limits on government action.

What should you do if speech targets you?

Document the material and seek legal advice, especially for defamation or credible threats; public-figure defamation claims require higher proof.

Understanding the boundary between protected and unprotected speech helps citizens, students, and community leaders assess disputes and seek remedies when speech causes real harm. The Supreme Court opinions named in this article set the baseline, but courts decide many cases based on specific facts, so professional legal advice may be necessary in particular disputes.

For civic actors, remember that private platforms and state laws add practical constraints beyond constitutional doctrine.
