The aim is to give voters, journalists and civic readers a clear, sourced guide to how legal limits work in practice, and to point to primary sources and monitoring reports for further verification.
What freedom of expression and social media means: definition and scope
Free expression in law and online platforms
Freedom of expression is a fundamental human right that covers opinions, ideas and information, whether shared in speech, writing or online. According to UN Human Rights Committee General Comment No. 34, restrictions on expression are not forbidden, but they must meet a clear three-part test: be prescribed by law, pursue a legitimate aim, and be necessary and proportionate. That framework remains the baseline for rights review in 2026.
On social media, the same kinds of expression travel faster and reach larger audiences than in most traditional media. The scale and speed of platform distribution, and the effects of algorithmic amplification, change how risks such as incitement, coordinated disinformation or reputational harms play out in practice.
Why social media raises distinct questions
Social media raises distinct legal and policy questions because platforms combine user content, private moderation policies and automated ranking systems, and because users can cross national borders with a single post. Monitoring reports find substantial variation in how platforms apply policies for hate speech, disinformation and threats to public safety, and they note transparency gaps in algorithmic enforcement (Freedom on the Net 2024: The Global Drive to Control Big Tech and Online Expression).
Limitations on speech therefore operate at two levels: state law and private-platform rules. International human-rights guidance treats legal limits as strictly bounded, not as open permission for arbitrary removals, and it stresses clear legal tests when states act. That distinction matters for voters, moderators and users trying to understand whether a restriction is lawful or merely policy-based.
Consult primary human-rights guidance and recent monitoring reports when assessing how limits are applied online; they clarify legal tests and common transparency concerns without prescribing specific platform policies.
The three-part human-rights test for valid restrictions
1) Prescribed by law
International guidance makes clear that any measure restricting expression must have a legal basis that is accessible and foreseeable. Restrictions that are vague or overly broad risk being struck down or criticized by rights bodies for failing the lawfulness requirement (UN Human Rights Committee General Comment No. 34).
2) Legitimate aim
States may only limit expression for purposes recognized as legitimate under human-rights law, such as protecting public order, national security, public health, the rights of others, or the prevention of hate speech and discrimination. The requirement that a restriction pursue one of these aims helps distinguish lawful regulation from content control for political ends.
3) Necessary and proportionate
Necessity and proportionality together form the limb that most often determines whether a restriction is permissible. A measure is necessary if a less restrictive option would not achieve the same legitimate aim, and it is proportionate if its benefits outweigh the harm to expression. Rights bodies apply this balancing test when reviewing legal limits and state actions (UN Human Rights Committee General Comment No. 34).
The practical consequence is that laws must be narrowly tailored. A broadly worded statute that criminalizes vague categories of speech is likely to fail the proportionality test in international review.
How national systems differ: U.S. incitement doctrine versus European proportionality
Brandenburg and the U.S. standard for incitement
In the United States, criminal punishment for speech is constrained by the Supreme Court’s Brandenburg standard, which requires intent to incite, imminence of lawless action and a likelihood that the speech will produce such action; that test sets a high bar for state prosecution of speech for incitement (Brandenburg v. Ohio, 395 U.S. 444 (1969)).
Article 10 ECHR and proportionality balancing in Europe
In Europe, Article 10 of the European Convention on Human Rights is interpreted through a proportionality analysis in which national restrictions are weighed against the public interest and the importance of the expression. Council of Europe guidance and case law show that the same speech can lead to different outcomes in Europe than under U.S. incitement doctrine, because proportionality allows narrower or wider limits depending on context (Council of Europe overview and guidance on Article 10).
Practical differences for social-media content
For social-media content, the practical implication is that identical posts may be lawful in one jurisdiction and actionable in another. Platforms that operate across borders must navigate this patchwork, creating enforcement choices that reflect different legal thresholds and local laws.
These jurisdictional differences also affect how moderators and automated systems are calibrated, including how platforms assess imminence or likelihood for potential harms.
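As an illustrative aid only, the patchwork described above can be sketched as a tiny rule table. The jurisdictions, field names, thresholds and scores below are invented for the example; they model a Brandenburg-style imminence requirement and an Article 10-style balancing factor, not any platform's actual logic:

```python
# Hypothetical per-jurisdiction thresholds for incitement-style content.
# "requires_imminence"/"requires_likelihood" model Brandenburg-style limbs;
# the final balancing step models Article 10-style proportionality.
JURISDICTION_RULES = {
    "US": {"requires_imminence": True,  "requires_likelihood": True},
    "EU": {"requires_imminence": False, "requires_likelihood": False},
}

def actionable(post: dict, jurisdiction: str) -> bool:
    """Decide whether a post crosses the local legal threshold (sketch only)."""
    rules = JURISDICTION_RULES[jurisdiction]
    if rules["requires_imminence"] and not post["imminent"]:
        return False  # Brandenburg-style: no imminence, no liability
    if rules["requires_likelihood"] and not post["likely_to_cause_harm"]:
        return False
    # Proportionality-style balancing: weigh claimed harm against expression value.
    return post["harm_score"] > post["expression_value"]

# The same post can be lawful in one jurisdiction and actionable in another.
post = {"imminent": False, "likely_to_cause_harm": True,
        "harm_score": 0.8, "expression_value": 0.4}
print(actionable(post, "US"))  # False: fails the imminence requirement
print(actionable(post, "EU"))  # True: harm outweighs expression value
```

The point of the sketch is structural: cross-border platforms cannot apply one threshold everywhere, so enforcement choices necessarily encode different legal tests per jurisdiction.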
Hate speech, defamation and criminalization: thresholds and guidance
Rabat Plan of Action and high thresholds for criminal law
The Rabat Plan of Action emphasizes a high threshold before criminalizing speech, urging context-sensitive tests to avoid overly broad punishment of expression. It recommends careful assessment of intent, content and context to distinguish protected expression from criminal incitement (Rabat Plan of Action on the prohibition of advocacy of national, racial or religious hatred).
When defamation or hate-speech laws are applied
Defamation claims and civil remedies address reputational harms with different burdens and standards than criminal law. Civil law often focuses on balancing reputation and truth, while criminal hate-speech provisions carry higher risks for punishment and therefore trigger stricter guidance from human-rights bodies.
Risks of overbroad criminalization
Monitoring reports and legal reviews note risks when states or platforms apply broad categories to remove speech or pursue criminal penalties without clear thresholds. These concerns include chilling effects on legitimate debate and the uneven application of rules, which monitoring bodies have documented in recent years (Freedom on the Net 2024: The Global Drive to Control Big Tech and Online Expression).
National security, privacy and public-order exceptions: limits and legal tests
How states justify restrictions on security or privacy grounds
States commonly invoke national security, privacy or public order to justify limits on expression. These aims can be legitimate, but human-rights guidance treats such justifications as subject to strict necessity and proportionality review, especially where secrecy or wide discretion is invoked (UN Human Rights Committee General Comment No. 34).
Necessity, proportionality and available remedies
When governments restrict speech for security or privacy reasons, courts and review bodies often require evidence that a narrowly tailored measure was needed and that less intrusive steps were considered. Evaluations of platform measures in the EU have also stressed the need for remedies and transparency when private or public actors remove or limit content (Third evaluation of the Code of Practice on Disinformation).
Contested cases and policy reviews
Recent case law and policy reviews show that claims framed as national security or privacy can be contested, with courts sometimes pushing back where the state has not demonstrated proportionality or where secrecy prevents meaningful review.
Content moderation on social media: platform policies, transparency and algorithmic enforcement
What monitoring reports say about platforms’ practices
Independent monitoring and evaluations find that platforms and states apply diverse moderation policies for hate speech, disinformation and threats to public safety, and that enforcement and transparency differ across platforms and jurisdictions (Freedom on the Net 2024: The Global Drive to Control Big Tech and Online Expression).
Transparency gaps and algorithmic moderation
Evaluations by the European Commission and civil-society groups highlight gaps in transparency, especially around algorithmic enforcement and decision logs. Those gaps complicate users’ ability to contest removals or understand why content is de-amplified or removed (Third evaluation of the Code of Practice on Disinformation).
Cross-border enforcement challenges
Cross-border enforcement remains an open problem: platforms must respect local laws while maintaining consistent terms of service, and differences in legal thresholds for incitement, hate speech or privacy produce complex operational dilemmas for moderation.
How to evaluate whether a restriction is lawful: a practical decision framework
This checklist helps readers assess whether a social-media restriction is likely lawful under international standards and common regional approaches.
Quick evaluation checklist for content restrictions
Step 1 is to check for a clear legal basis: is the restriction grounded in an accessible law or regulation, or is it a platform policy without statutory backing? If the restriction lacks a legal basis, it may fail the first limb of the international test.
Step 2 asks whether the stated aim is one recognized under human-rights law, such as protecting public order, preventing hate speech, or safeguarding privacy. If the aim is vague or political, that is a red flag.
Step 3 considers necessity and proportionality: could a less restrictive measure achieve the same aim, and is the harm to expression justified by the benefit claimed? Documentation, transparent policy reasoning and remedial channels strengthen the proportionality case, while secrecy and broad language weaken it (Council of Europe overview and guidance on Article 10).
Use primary sources for verification
Monitoring reports are useful sources when assessing whether enforcement is consistent or selective.
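As a purely illustrative aid, the checklist above can be modeled as a structured check. Everything below is an assumption made for the sketch: the field names, the list of recognized aims, and the failure labels are simplifications, not legal definitions:

```python
from dataclasses import dataclass

# Aims commonly cited as legitimate under human-rights law
# (illustrative list only, not exhaustive or authoritative).
RECOGNIZED_AIMS = {
    "public order", "national security", "public health",
    "rights of others", "preventing hate speech",
}

@dataclass
class Restriction:
    """A hypothetical description of a speech restriction under review."""
    has_legal_basis: bool          # Step 1: grounded in accessible law?
    stated_aim: str                # Step 2: the purpose invoked
    less_restrictive_option: bool  # Step 3: would a milder measure suffice?
    harm_outweighs_benefit: bool   # Step 3: proportionality balancing

def evaluate(r: Restriction) -> list[str]:
    """Return the limbs of the three-part test that the restriction fails."""
    failures = []
    if not r.has_legal_basis:
        failures.append("not prescribed by law")
    if r.stated_aim.lower() not in RECOGNIZED_AIMS:
        failures.append("no recognized legitimate aim")
    if r.less_restrictive_option:
        failures.append("not necessary (milder measure available)")
    if r.harm_outweighs_benefit:
        failures.append("disproportionate")
    return failures

# Example: a vague, platform-only rule with no statutory backing.
vague_rule = Restriction(
    has_legal_basis=False,
    stated_aim="political convenience",
    less_restrictive_option=True,
    harm_outweighs_benefit=True,
)
print(evaluate(vague_rule))
```

In real review the three limbs are applied cumulatively and contextually by courts and rights bodies; the sketch only shows the order of the questions, which mirrors the steps above.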
Common mistakes, remedies and a brief conclusion
Typical errors by lawmakers, platforms and users
A common error is drafting laws or policies with broad or vague categories that capture a wide range of expression. Another is relying on automated systems without clear human review or transparent criteria, which can produce inconsistent results and chilling effects on debate.
Available remedies and calls for transparency
Human-rights guidance and monitoring bodies recommend narrowing criminal thresholds, publishing enforcement criteria, providing meaningful appeals and improving transparency about algorithmic decisions. These remedies aim to reduce unjustified restrictions while enabling legitimate protections.
Concise takeaway
The three central limitations on freedom of expression are that any restriction must be prescribed by law, pursue a legitimate aim, and be necessary and proportionate; international guidance and regional law apply these tests differently, with practical consequences for social-media moderation and cross-border enforcement (UN Human Rights Committee General Comment No. 34).
U.S. law uses an imminence and likelihood test for incitement, while European law applies proportionality balancing under Article 10, which can lead to different enforcement outcomes.
If your content is restricted, check the platform's published criteria and appeal options, document the removal, and consult monitoring reports or primary legal sources to assess whether the restriction aligns with recognized legal tests.
References
- https://www.ohchr.org/en/documents/general-comments-and-recommendations/general-comment-no-34-freedom-expression-article-19-covenant
- https://freedomhouse.org/report/freedom-net/2024
- https://supreme.justia.com/cases/federal/us/395/444/
- https://www.coe.int/en/web/freedom-expression
- https://www.ohchr.org/sites/default/files/Documents/Issues/Expression/Rabat_plan_of_action.pdf
- https://digital-strategy.ec.europa.eu/en/library/third-evaluation-code-practice-disinformation-2024
- https://www.yalejreg.com/bulletin/applying-international-human-rights-law-for-use-by-facebook/
- https://fra.europa.eu/en/eu-charter/article/11-freedom-expression-and-information
- https://promiseinstitute.law.ucla.edu/wp-content/uploads/2022/05/Social-Media-Content-Moderation-and-Internationals-Human-RIghts-Law.pdf
