The focus is legal and explanatory. Readers who need case-specific advice should consult primary opinions and a licensed attorney.
What the right to speech and expression covers and why limits exist
Short definition and scope
The right to speech and expression is a constitutional guarantee under the First Amendment that protects a wide range of public and private communication. Courts have long held that this right is robust, but they also recognize narrow categories of speech that can be regulated because they cause or are likely to cause concrete harms.
Limits exist because some forms of communication can lead to imminent physical harm, invade others' rights, or create legally cognizable injuries that civil law can remedy. For accessible context on where courts draw the line, consult a neutral overview from a civil liberties organization, and see the summary of the Brandenburg test from Cornell's Legal Information Institute.
A brief three-step checklist to evaluate speech categories
Use as a starting guide
The outline below follows leading case law and neutral summaries so readers can locate primary opinions and follow the tests courts apply. See the constitutional rights hub for related coverage.
Why the law recognizes some limits
The law permits limits when speech is tied to a specific, demonstrable harm that legal rules are designed to prevent. These limits are not broad exceptions but targeted responses where the balance of rights and harms favors regulation or private remedy instead of absolute protection.
How courts balance speech and public safety
Judges evaluate the category of speech, the context in which it occurred, whether the speaker intended harm, and the status of the speaker. Those factors shape whether speech is treated as protected political or expressive conduct, or instead falls into a recognized unprotected category.
How courts decide when speech is protected: the key tests
Overview of controlling standards
Several Supreme Court tests guide modern analysis. The Brandenburg standard controls incitement law, the Miller test governs obscenity, Sullivan sets the actual malice rule for public figures in defamation cases, and Chaplinsky established the original fighting words concept. These tests are the starting point for most court inquiries and help explain why some speech receives less protection than other speech.
Criminal and civil contexts differ: criminal cases focus on statutes and often require proof of a culpable mental state, while civil defamation claims use tort standards to determine fault and damages.
Differences between criminal and civil contexts
In criminal prosecutions the government must prove elements of a statute and often a mens rea element. In civil suits plaintiffs must prove the elements of defamation or other torts and show harm, usually through damages or equitable remedies. The burden of proof and the legal elements vary across contexts.
Role of the speaker’s status and intent
The speaker’s status as a public or private figure changes the fault standard in defamation cases, and courts often examine intent or likelihood when determining criminal liability for speech that risks harm.
When advocacy becomes unprotected incitement under Brandenburg
The imminence and likelihood requirement
Speech is unprotected as incitement when it is directed to producing imminent lawless action and is likely to produce such action. That two-part test requires both intent to incite and a real, near-term likelihood that the speech will cause unlawful conduct, under the Supreme Court's Brandenburg standard (Brandenburg v. Ohio).
Because both elements are required, mere advocacy of an idea or even advocacy of illegal conduct at a nonimminent time is usually protected political or rhetorical speech.
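Because the test is conjunctive, its structure can be shown as a minimal boolean sketch. The function below is purely illustrative: the name and boolean inputs are shorthand invented for this article, and whether each Brandenburg element is actually satisfied is a fact-specific judicial determination that no code can make.

```python
# Illustrative sketch only: encodes the conjunctive structure of the
# Brandenburg test. Each input represents a conclusion a court would
# reach after fact-specific analysis, not something mechanically known.

def is_unprotected_incitement(directed_to_imminent_lawless_action: bool,
                              likely_to_produce_such_action: bool) -> bool:
    """Both elements are required; either one alone leaves speech protected."""
    return directed_to_imminent_lawless_action and likely_to_produce_such_action
```

Under this sketch, abstract advocacy of illegal conduct at some indefinite future time fails the imminence element and so remains protected.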
Learn more from primary sources and neutral summaries
The primary opinion provides the controlling test courts use for incitement inquiries and is the best place to start when evaluating claims of unlawful advocacy.
How Brandenburg is applied by courts
Court applications focus on context, timing, and the speaker's apparent purpose. Judges ask whether the words were targeted to immediate action and whether the circumstances made unlawful action likely. Courts treat the determination as highly fact-specific and often review the parties' records and surrounding conduct closely.
Examples of conduct that has been found to be incitement
Illustrative examples in decisions typically involve speech that directed a crowd to commit violent acts in the immediate setting or specific instructions timed to produce unrest. Hypothetical or abstract calls for illegal behavior have repeatedly been held protected where they lack immediacy or clear likelihood of causing lawless action.
True threats and threatening speech after Elonis
What courts mean by a true threat
True threats are statements where a reasonable person would interpret the communication as a serious expression of intent to harm another person. These communications can be prosecuted or otherwise regulated because they place individuals in fear and can produce real danger.
The Elonis decision and mens rea
The Supreme Court in Elonis emphasized that criminal liability for threatening speech turns on the speaker's mental state in many contexts, not solely on whether a reasonable listener would have felt threatened (Elonis v. United States). That decision narrowed some lines of criminal liability by highlighting mens rea considerations.
How courts treat online threats
Online posts raise special questions about intent, audience, and immediacy. Courts examining digital threats often scrutinize the speaker’s purpose and surrounding facts to determine whether a communication qualifies as a true threat under established tests.
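The same conjunctive structure appears in the true-threat context after Elonis. The sketch below is a hedged illustration with hypothetical names, not doctrine: it only captures that an objectively threatening message is generally not enough on its own, since the prosecution must also establish the speaker's mental state.

```python
def may_be_prosecutable_threat(reads_as_serious_threat_to_reasonable_person: bool,
                               culpable_mental_state_provable: bool) -> bool:
    # After Elonis, criminal liability for threatening speech generally
    # requires proof of the speaker's culpable mental state in addition
    # to the objective reading of the message.
    return (reads_as_serious_threat_to_reasonable_person
            and culpable_mental_state_provable)
```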
Defamation: how libel and slander are treated differently for public and private figures
Elements of defamation and remedies
A defamation claim requires a false statement of fact, publication to a third party, fault by the speaker, and damages or other legally cognizable harm. Civil remedies for defamation typically include damages and, in some cases, injunctive relief or corrections.
When the plaintiff is a public official or public figure, the Supreme Court requires proof of actual malice under New York Times Co. v. Sullivan, meaning the statement was made with knowledge of falsity or with reckless disregard for the truth. Private-figure plaintiffs face a lower fault standard set by state law, and states vary in how they allocate fault and damages.
Actual malice standard for public figures
The actual malice requirement raises the bar for public-figure plaintiffs because it demands clear proof about the speaker’s knowledge or reckless conduct. That standard reflects a balance between protecting reputation and preserving robust public debate about government and public affairs.
State law standards for private-figure claims
Private-figure plaintiffs generally need only show negligence or another lower fault standard under state tort law. State courts handle those inquiries, and remedies can be awarded for harm to reputation when the elements are met.
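The split between public- and private-figure fault standards can be summarized as a small lookup. This is an illustrative sketch only: the returned labels are shorthand for this article, and actual private-figure standards vary state by state.

```python
def defamation_fault_standard(plaintiff_is_public_figure: bool) -> str:
    # Sullivan requires public officials and public figures to prove
    # "actual malice": knowledge of falsity or reckless disregard for
    # the truth. Private figures face a lower, state-law standard,
    # commonly negligence.
    if plaintiff_is_public_figure:
        return "actual malice (Sullivan)"
    return "lower state-law standard, commonly negligence"
```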
Obscenity and the Miller test
The three Miller prongs
Obscenity is excluded from First Amendment protection when material satisfies the Miller three-part test: (1) whether the average person, applying contemporary community standards, would find that the work appeals to prurient interest; (2) whether the work depicts or describes sexual conduct in a patently offensive way; and (3) whether the work lacks serious literary, artistic, political, or scientific value. These criteria come from Miller v. California and remain the controlling framework for obscenity inquiries.
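As with Brandenburg, the Miller test is conjunctive: all three prongs must be met. The sketch below is illustrative only, with names invented here; each prong is itself a contested, fact-specific question for a judge or jury, not a boolean anyone can supply mechanically.

```python
def may_be_obscene(appeals_to_prurient_interest: bool,
                   patently_offensive_sexual_depiction: bool,
                   lacks_serious_value: bool) -> bool:
    # All three Miller prongs must be satisfied; serious literary,
    # artistic, political, or scientific value defeats an obscenity
    # finding on its own.
    return (appeals_to_prurient_interest
            and patently_offensive_sexual_depiction
            and lacks_serious_value)
```

Because the third prong is an independent requirement, explicit but valuable work falls outside the sketch's True branch, mirroring the point made in the next subsection.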
Community standards and serious value
Because the first Miller prong uses local community standards, outcomes can vary by place and audience. The third prong requires courts to consider whether the material has any legitimate social value, which can keep many works from being labeled obscene even if they are explicit.
How obscenity differs from protected sexual expression
Not all sexual expression is obscene; material with redeeming literary, artistic, political, or scientific value can remain protected even if it is controversial or offensive to some readers. The Miller test is therefore applied carefully and with attention to context.
Fighting words and the Chaplinsky principle
Chaplinsky and the original fighting words doctrine
Chaplinsky identified a class of face-to-face abusive words that by their utterance inflict injury or tend to breach the peace, a category treated as outside First Amendment protection in that original decision (Chaplinsky v. New Hampshire). The idea was that some personally abusive words provoke immediate violence and can be limited to preserve public order.
How modern courts have narrowed the doctrine
Over time, courts have narrowed the fighting words doctrine, applying it in fewer cases and often requiring context showing a real risk of breach of the peace. Many kinds of offensive speech, including insults or harsh rhetoric, remain protected unless they meet the stricter factual tests courts now require.
When speech near ‘fighting words’ may still be protected
Speech that is abusive or insulting does not automatically lose protection. Courts look to whether the words were likely to produce immediate violence in the specific setting, and whether the exchange was truly face-to-face and provocative in the narrow sense Chaplinsky contemplated.
Applying these categories online and on social platforms
Why medium and context matter
The online medium can affect imminence, reach, and how a message is understood. A digital statement may reach far more people and be archived indefinitely, but those differences do not change the core legal tests; they change the facts judges assess when applying the tests.
Challenges in policing online threats and incitement
Courts have noted that online posts can complicate the imminence analysis because calls to action might lack the temporal or geographic immediacy that Brandenburg requires, and because mens rea can be harder to prove. Elonis demonstrated the Court's attention to mental state in online threat cases, which influences how prosecutors and judges approach digital communications. For recent discussion of incitement regulation in the internet era, see the NYU Moot Court proceedings piece cited in the references.
How courts have treated online speech in recent rulings
Decisions emphasize fact specific inquiry and the need to examine context, audience, and the speaker’s intent. Platform moderation and private terms of service are separate from constitutional rules and can limit or remove speech even when the Constitution would not permit government restriction.
A practical framework to decide whether speech might be unprotected
Step 1: Identify the category
Start by asking which doctrinal bucket the communication might fit into: incitement, true threats, defamation, obscenity, or fighting words. Each category has distinct elements drawn from leading cases.
Step 2: Test for required elements
Apply the controlling test for that category. For incitement, use the Brandenburg imminence-and-likelihood test (Brandenburg v. Ohio). For obscenity, use Miller. For defamation, note whether the plaintiff is a public figure and whether Sullivan requires actual malice. For threats and fighting words, examine intent and immediacy as the cases require.
Step 3: Consider speaker status and medium
Assess whether the speaker is a public figure, whether the speech was online or face-to-face, and whether the context suggests real, near-term danger. These contextual factors often determine whether a technically problematic statement becomes legally actionable.
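The three steps above can be sketched as a lookup from doctrinal category to controlling test. This is a study aid, not legal logic: the category labels and summaries are shorthand written for this article, and Step 3's contextual factors resist any mechanical encoding.

```python
# Hypothetical study-aid mapping from doctrinal category to the
# controlling test named in this article. Step 3 (speaker status,
# medium, context) is fact-specific and deliberately not encoded.
CONTROLLING_TESTS = {
    "incitement": "Brandenburg: intent plus likelihood of imminent lawless action",
    "true threat": "serious expression of intent to harm; mens rea per Elonis",
    "defamation": "falsity, publication, fault (Sullivan actual malice for public figures), harm",
    "obscenity": "Miller three-prong test",
    "fighting words": "Chaplinsky, as narrowed to face-to-face provocation",
}

def controlling_test(category: str) -> str:
    # Speech outside the recognized buckets is presumptively protected.
    return CONTROLLING_TESTS.get(category, "presumptively protected speech")
```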
Common mistakes and legal pitfalls to avoid
Mistaking unpopular speech for unprotected speech
One common error is treating offensive or hateful speech as categorically unprotected. Much offensive speech remains protected unless it meets the specific elements of a recognized exception identified by the courts.
Relying on platform enforcement as a legal determination
Platform takedowns do not equal illegality. Companies enforce their own terms. Those private actions operate independently of constitutional limits on government and do not determine whether speech is legally protected.
Assuming criminal liability without intent evidence
Criminal consequences usually require proof of mens rea or other statutory elements. Before assuming a statement will lead to prosecution, consider whether the government can show the required intent or likelihood elements under the controlling cases.
Practical, hypothetical scenarios readers can use to test protections
Hypothetical: political rally speech
Hypothetical: A speaker at a rally uses heated rhetoric urging protest but offers no plan or timing for illegal action. Under the Brandenburg framework that speech is likely protected because there is no clear intent to prompt imminent lawless action.
Hypothetical: an online post that threatens another person
Hypothetical: An online poster writes a message that could be read as a serious threat. Readers should apply the true threat standard and consider whether the post shows intent to threaten, whether a reasonable person would feel threatened, and whether mens rea evidence exists to support criminal liability.
Hypothetical: graphic sexual material distributed locally
Hypothetical: Graphic sexual content distributed in a way that a jury could find lacks serious literary or artistic value may meet the Miller prongs and be treated as obscene, but community standards and the value prong make outcomes fact-specific.
Consequences and remedies: criminal charges, civil suits, and platform actions
Criminal prosecution standards
When speech crosses into criminal territory, prosecutors pursue charges under statutes that typically require proof of intent or other elements beyond the words themselves. For incitement and threats, courts examine the relevant mens rea and imminence components before allowing criminal punishment.
Civil remedies for defamation
Defamation actions are civil claims brought by harmed parties who seek damages or corrections. The standards differ for public and private figures, which affects the plaintiff's burden and the nature of the proof required in court (New York Times Co. v. Sullivan).
Role of platforms and private moderation
Platforms enforce their rules and may remove content for policy reasons even if the content is legally protected from government regulation. That distinction matters for users considering whether to challenge removals through policy processes or through legal channels.
Where to look for reliable primary sources and further reading
Official opinions and law libraries
Primary Supreme Court opinions are the authoritative sources for these legal tests. Start with the Court's opinions in Brandenburg, Miller, Sullivan, Chaplinsky, and Elonis to read the holdings and the tests courts apply. The Brandenburg opinion is available both from Cornell's Legal Information Institute and from Justia.
Neutral overviews from civil liberties groups
Neutral summaries from civil liberties organizations provide accessible explanations of doctrines and typical applications. Those overviews help interpret the holdings and place them in modern context without substituting for primary law; the ACLU's overview of First Amendment protections is one example. For discussion of social media and moderation, see our analysis of freedom of expression and social media.
How to read a case opinion
When reading an opinion, focus on the holding and the test the Court announces, then look at the facts the Court relied on. Distinguish holdings from dicta and use that analysis to compare your facts to the controlling precedent.
If you think you or someone else is at legal risk
When to seek a lawyer
Consult a licensed attorney for case-specific questions and before taking any legal action. This article is explanatory and not a substitute for legal advice.
What information to collect before consulting counsel
Document dates, full texts or screenshots of the communication, context details, and any witnesses or records of related conduct. Clear documentation helps counsel evaluate whether speech may meet a recognized exception to protection.
Alternatives to litigation
Consider dispute resolution, public corrections, or platform appeal processes as alternatives to lawsuits. Those nonlegal responses can resolve some conflicts without court involvement.
Key takeaways
Summary of categories not protected
Five core categories courts treat as unprotected or subject to lesser protection are incitement, true threats, defamation, obscenity, and fighting words. Each category has a distinct test developed in Supreme Court opinions and refined over time.
Final cautions
Whether speech falls outside First Amendment protection depends heavily on context, the speaker's intent or mental state, the medium used, and the speaker's status. Fact-specific analysis and judicial interpretation determine outcomes in most disputes.
Next steps for readers
To learn more, read the primary opinions and neutral overviews cited above, and consult a lawyer for a case-specific determination if you believe a communication may be unlawful or expose you to legal risk. See also our explainer on the First Amendment's five freedoms.
Frequently asked questions
When is speech considered incitement?
Speech is considered incitement when it is intended to and likely to produce imminent lawless action; both elements are required under the Brandenburg test.
Is an online threat protected simply because it was posted rather than spoken?
No. Courts look at the speaker's intent and the context; online messages may be prosecuted as true threats if they show a serious intent to harm and the required mental state can be proven.
Can a platform remove speech that the Constitution protects?
Yes. Platforms have their own terms of service and may remove content even when the Constitution would not permit government restriction.
This article aims to give readers clear guidance and references to follow up with primary law and reputable summaries.
References
- https://www.law.cornell.edu/wex/brandenburg_test
- https://michaelcarbonara.com/issue/constitutional-rights/
- https://www.law.cornell.edu/supremecourt/text/395/444
- https://www.law.cornell.edu/supremecourt/text/13-983
- https://www.law.cornell.edu/supremecourt/text/376/254
- https://www.law.cornell.edu/supremecourt/text/413/15
- https://www.law.cornell.edu/supremecourt/text/315/568
- https://michaelcarbonara.com/contact/
- https://proceedings.nyumootcourt.org/2022/12/the-inadequacy-of-brandenburgs-imminence-incitement-regulation-in-the-internet-era/
- https://supreme.justia.com/cases/federal/us/395/444/
- https://www.aclu.org/other/what-does-first-amendment-protect
- https://michaelcarbonara.com/freedom-of-expression-and-social-media/
- https://michaelcarbonara.com/first-amendment-explained-five-freedoms/

