The phrase "hate speech is protected by the First Amendment" is a useful starting point, but it requires nuance. Many offensive or hateful statements remain within constitutional protection unless they meet narrow doctrinal tests such as incitement, obscenity, true threats, or defamation. This article walks through those tests, gives representative examples, and flags unresolved issues in digital contexts.
If ‘hate speech is protected by the First Amendment,’ what does that mean?
The short point is that the claim "hate speech is protected by the First Amendment" captures an important baseline: many hateful expressions remain within constitutional protection unless they meet narrow, settled tests that remove that protection.
Caution is needed, however. Courts recognize specific categories of unprotected speech such as incitement, obscenity, true threats, fighting words, and some defamation, and they apply distinct tests to each category rather than treating all hateful words as outside the First Amendment. This approach reflects judicial caution about creating broad new exceptions to free speech doctrine, a theme the Court has emphasized in recent decisions (see the United States v. Stevens opinion).
For readers who want the primary opinions discussed below, consult the cited cases and the linked court texts in each section for the full legal language and context.
This opening does not mean hateful or offensive speech is unregulated in every setting. Speech that crosses into incitement, true threats, or other narrow categories can be subject to government regulation or criminal sanction when the legal criteria are met.
In this article we outline the governing tests, give representative examples courts have treated as unprotected, and flag unsettled issues that arise in online amplification and platform contexts.
The incitement standard: Brandenburg v. Ohio
Brandenburg v. Ohio establishes the controlling incitement test: advocacy is unprotected only when it is directed to inciting imminent lawless action and is likely to produce such action (see the Brandenburg v. Ohio opinion).
That rule has two core components. First, the speech must be intended to produce imminent unlawful conduct. Second, there must be a real likelihood that the conduct will occur imminently as a result of the speech.
Because of these elements, general advocacy of illegal acts at an abstract level tends to remain protected. A speaker who praises violence in general terms but does not direct listeners to act immediately is usually still protected speech under Brandenburg.
By contrast, a call to a crowd to commit a specified illegal act at a particular place and time, made when the crowd can immediately act and when violence is likely to follow, will more readily meet the Brandenburg test and fall outside First Amendment protection (see the Brandenburg v. Ohio opinion).
Obscenity and the Miller test
Obscenity is one narrow category the Court has allowed states to regulate outside First Amendment protection, but the test is specific and fact intensive. Miller v. California sets out three parts: whether the average person, applying contemporary community standards, would find that the work, taken as a whole, appeals to the prurient interest; whether the work depicts or describes, in a patently offensive way, sexual conduct specifically defined by applicable law; and whether the work, taken as a whole, lacks serious literary, artistic, political, or scientific value (see the Miller v. California opinion).
Because the Miller test ties part of the analysis to local community standards, what qualifies as obscene can vary across jurisdictions. Courts therefore examine both the content at issue and the context in which it appears.
Courts treat a few narrow categories as unprotected, including incitement of imminent lawless action, obscene material that meets the Miller test, true threats and intimidation, fighting words in limited cases, and some defamatory falsehoods depending on the plaintiff's status and fault standard.
In practice, prosecutions for obscenity require evidence that the material meets all Miller elements, and courts are careful not to conflate unpopular or offensive content with legally obscene material.
Where a work has recognized literary, artistic, political, or scientific value, that quality will generally protect it from an obscenity finding even if parts of the content are sexual or offensive.
Defamation and the ‘actual malice’ rule for public figures
Defamation law allows individuals to seek remedies for false statements that harm reputation, but the rules differ depending on whether the subject is a private person or a public figure. New York Times Co. v. Sullivan established that public officials and public figures must show actual malice to prevail, meaning that a defendant published a false statement knowing it was false or with reckless disregard for the truth (see the New York Times Co. v. Sullivan opinion).
That higher standard narrows the range of defamatory speech that government can punish when it concerns public persons, reflecting the Court’s interest in vigorous public debate about government and public life.
For private individuals, many states permit recovery on lesser fault standards, so identical false statements may lead to liability in one case and not in another depending on the plaintiff’s status and the jurisdictional law.
When reporting or commenting on public figures, careful sourcing and attribution reduce the risk of defamation claims and help readers understand whether statements are fact or opinion.
True threats and intimidation, including cross burning
Courts treat true threats as unprotected speech when a statement is meant to intimidate and places its target in fear of violence or death. Such statements can be criminally punished when the required intent or context is established (see the Virginia v. Black opinion).
Symbolic acts can also become unprotected when they are intended to intimidate. The Court has addressed cross burning as an example: when the act is intended to intimidate a person or group, it may be treated as a punishable true threat or intimidation rather than protected symbolic expression (see the Virginia v. Black opinion).
Context, audience, and the actor’s purpose matter for true threat analysis. Courts examine whether a reasonable person in the target’s position would feel threatened and whether the actor intended to communicate a serious expression of intent to harm.
The line between provocative rhetoric and an actionable threat can be narrow. Intent and circumstances shape the legal outcome, and courts will review evidence of both when deciding prosecutions or civil claims.
When intent matters: Elonis and the role of mens rea
Elonis v. United States clarified that prosecutions for threatening speech must account for the speaker’s mental state; the Court emphasized that liability cannot rest solely on how a reasonable person would perceive the words (see the Elonis v. United States opinion).
That decision highlights the difference between objective tests, which focus on how speech appears to others, and subjective tests, which look to the speaker’s purpose, knowledge, or recklessness. Where courts require subjective intent, prosecutors must present evidence that the speaker acted with the requisite state of mind.
Different circuits have followed Elonis in various ways, and the presence or absence of a required mens rea can change case outcomes substantially, especially in online contexts where statements may be ambiguous.
For readers, understanding whether a jurisdiction applies an objective or subjective standard helps predict whether particular threatening statements will be treated as criminal or remain protected speech.
Limits on new categories: United States v. Stevens and judicial caution
The Supreme Court has made clear that it is cautious about creating novel, broad exceptions to the First Amendment. In United States v. Stevens the Court rejected a proposed new categorical exception for depictions of certain violent conduct, favoring instead reliance on established tests and doctrines (see the United States v. Stevens opinion).
That reluctance means courts will generally try to fit contested speech within established categories like incitement, obscenity, or threats rather than inventing new classes of unprotected speech without close analysis.
As a result, many calls to expand unprotected categories face a high bar in appellate review, and litigants seeking new exceptions confront the Court’s preference for narrow, doctrine-based rules.
How courts apply these tests in real cases
When judges analyze whether speech is protected, they routinely examine context, the identity and status of the speaker and target, the immediacy of any alleged call to action, and evidence about likely effect. These factual inquiries shape how the legal tests operate in practice.
For example, courts cite Brandenburg when evaluating whether political speech crossed into imminent incitement, and they apply Miller when adjudicating obscenity claims, with each test steering the fact-finding process toward particular evidentiary points (see the Brandenburg v. Ohio opinion).
Appellate courts review both legal interpretation and factual findings, and different circuits sometimes reach different outcomes on close questions of context or mens rea. That variation means holdings can change as higher courts clarify doctrine.
Judges also weigh the practical consequences of rulings; decisions that too readily permit criminalization of speech risk chilling lawful expression, while decisions that too narrowly construe threats or incitement can leave targets unprotected.
Common examples courts have found unprotected
Representative examples tied to doctrine help clarify the boundaries. Courts have treated calls that directly urge an immediate violent act and that are likely to produce violence as incitement under Brandenburg; prosecutions of such conduct rest on the imminence and likelihood elements (see the Brandenburg v. Ohio opinion).
Courts have also upheld prosecutions for obscene material that meets all Miller elements, where community standards and the lack of serious value were proven (see the Miller v. California opinion).
Defamatory false statements published with actual malice about public figures have been found actionable under New York Times Co. v. Sullivan, reflecting the higher bar for public-person plaintiffs in defamation law (see the New York Times Co. v. Sullivan opinion).
And statements or symbolic acts aimed at intimidating a group or individual, when proven to convey a serious threat, have been treated as true threats and fall outside First Amendment protection (see the Virginia v. Black opinion).
Borderline cases and unresolved questions
Hateful speech that does not meet a controlling test remains protected in many circumstances. For example, rhetoric that expresses hostility toward a group but stops short of a directed, imminent call to violence or a true threat will typically remain within the scope of protected expression.
The fighting words doctrine is narrow and rarely successful today. It applies to face-to-face utterances likely to provoke an immediate breach of the peace, but courts have limited its reach and stressed contextual analysis in modern cases.
Because many legal outcomes depend on facts about intent, context, and the likelihood of harm, courts often resolve borderline cases through careful fact-finding rather than broad categorical rulings.
Digital platforms, amplification, and the unsettled terrain
Applying these doctrinal tests online raises hard questions. Social media can amplify messages rapidly and to wide audiences, and algorithmic recommendation may affect how likely speech is to produce harm, but courts are still working through how traditional tests apply in that environment. For further discussion, see the site’s article on freedom of expression and social media and analysis by the EFF.
The Supreme Court has not adopted a comprehensive new doctrine for platform amplification, and debates about platform liability, moderation, and the interaction with First Amendment principles continue in litigation and legislation rather than in settled high court precedent (see Murthy v. Missouri and related coverage).
Readers should watch for cases and statutes that address platform-specific issues, and should not assume that amplification alone changes the underlying doctrinal tests without a court or legislature saying so. See commentary from civil liberties groups such as ACLU for one perspective on recent rulings.
Practical advice for readers and journalists
When evaluating claims about whether speech is unprotected, check the primary sources. Read the controlling opinions cited in this article to confirm the specific tests and how courts applied them in context (see, for example, the New York Times Co. v. Sullivan opinion). For background on First Amendment basics, see the site’s First Amendment explainer.
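As a practical aid to checking primary sources, the reference list at the end of this article shows that Cornell's Legal Information Institute hosts Supreme Court opinions at URLs built from the U.S. Reports citation (volume and first page). The sketch below, a minimal example assuming that URL scheme holds generally, builds links to the controlling cases discussed here:

```python
def lii_opinion_url(volume: int, page: int) -> str:
    """Return the LII URL for an opinion cited as <volume> U.S. <page>.

    Assumes the pattern /supremecourt/text/<volume>/<page>, which matches
    the links in this article's reference list.
    """
    return f"https://www.law.cornell.edu/supremecourt/text/{volume}/{page}"

# The controlling cases discussed in this article, keyed by name with their
# U.S. Reports citations (e.g., Brandenburg v. Ohio, 395 U.S. 444).
CASES = {
    "Brandenburg v. Ohio": (395, 444),
    "Miller v. California": (413, 15),
    "New York Times Co. v. Sullivan": (376, 254),
    "Virginia v. Black": (538, 343),
    "United States v. Stevens": (559, 460),
}

for name, (vol, page) in CASES.items():
    print(f"{name}: {lii_opinion_url(vol, page)}")
```

Running this prints a direct link for each case, which can be pasted into a browser to read the full opinion text.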
Use attribution in reporting. Phrases like ‘according to the cited case’ or ‘the court held’ help distinguish a legal summary from an asserted fact and reduce the chance of overstating a claim about legal status.
Be cautious about labeling speech as unprotected based solely on its content. Where possible, identify the doctrinal test you think applies and explain which elements are met and which are not.
State laws, enforcement differences, and local context
States may prosecute obscenity or threatening conduct consistent with Supreme Court tests, but enforcement and priorities vary. Prosecutorial discretion and local statutes materially affect how laws are applied on the ground (see the Miller v. California opinion).
Readers should consult state statutes and local case law for precise rules in their jurisdiction, because the same speech may trigger different legal responses depending on local standards and enforcement choices. See the site’s constitutional rights resources for more.
Summary: key takeaways about unprotected speech
Key categories of unprotected speech include incitement to imminent lawless action, obscene material that meets the Miller test, true threats and intimidation, fighting words in narrow circumstances, and some defamatory falsehoods subject to the applicable fault standard.
The controlling tests to remember include Brandenburg for incitement, Miller for obscenity, and New York Times Co. v. Sullivan for defamation involving public figures. These cases provide the doctrinal framework courts use to evaluate whether speech is protected (see the Brandenburg v. Ohio opinion).
Intent, context, speaker status, and the likelihood of harm are central to the analysis, and unsettled issues remain about how these doctrines apply online.
For further reading, review the primary Supreme Court opinions cited in this article to see the exact legal language and the fact patterns on which courts relied.
Frequently asked questions

Is hate speech illegal in the United States?
No. Much hateful speech remains constitutionally protected unless it meets narrow exceptions such as incitement, true threats, obscenity, or defamation under established legal tests.
Can online posts lose First Amendment protection?
Potentially, yes. Online posts may be unprotected if they meet the same legal tests courts use for in-person speech, but how amplification affects those tests is still evolving in courts and statutes.
How can I verify a claim that particular speech is unprotected?
Check primary court opinions and statutes, attribute claims to the relevant case or law, and explain which legal elements are met rather than using broad labels without source support.
This guide aims to be a neutral resource. For case texts and further research, refer to the linked Supreme Court opinions cited above.
References
- https://www.law.cornell.edu/supremecourt/text/559/460
- https://www.law.cornell.edu/supremecourt/text/395/444
- https://www.law.cornell.edu/supremecourt/text/413/15
- https://www.law.cornell.edu/supremecourt/text/376/254
- https://www.law.cornell.edu/supremecourt/text/538/343
- https://michaelcarbonara.com/issue/constitutional-rights/
- https://michaelcarbonara.com/freedom-of-expression-and-social-media-impact/
- https://michaelcarbonara.com/first-amendment-explained-five-freedoms/
- https://www.supremecourt.gov/opinions/23pdf/23-411_3dq3.pdf
- https://www.aclu.org/press-releases/supreme-court-ruling-underscores-importance-of-free-speech-online
- https://www.eff.org/deeplinks/2024/08/through-line-suprme-courts-social-media-cases-same-first-amendment-rules-apply
