How Regulators Are Balancing Free Speech and Safety Online

  • By Paul Waite
  • 13 min read

Governments across the UK, EU, US, and beyond are rapidly rewriting the rules of the online world to tackle hate speech, misinformation, and risks to children's safety—without crushing the freedom of expression that underpins democratic debate. Between 2021 and 2024, landmark laws like the UK’s Online Safety Act and the EU Digital Services Act have fundamentally shifted power and responsibility onto social media platforms, search services, and other online services. The tensions are real and immediate: from the swift removal of extremist content after the 2019 Christchurch attack, to heated COVID-19 misinformation debates during 2020–2022, to ongoing battles against post-2022 Ukraine war propaganda campaigns. This article examines how regulators are trying to balance rights and risks through specific legal tools, enforcement models, and oversight mechanisms—and what that means for users, platforms, and the future of online communication.

Global Legal Foundations: How Free Speech Is Protected and Limited Online

Different constitutional traditions shape how far regulators can push when regulating online speech. In the United States, the First Amendment creates robust protections against government censorship, while Section 230 of the 1996 Communications Decency Act shields digital platforms from most liability for user content. This combination makes it difficult for US lawmakers to impose the kind of sweeping content moderation duties seen elsewhere. Major debates from 2016 to 2024—including the Trump account bans, COVID-19 misinformation controversies, and “jawboning” cases like Missouri v. Biden where critics alleged government pressure on tech companies—illustrate the American legal landscape’s deep suspicion of state involvement in online content decisions.

The European model operates differently. Article 10 of the European Convention on Human Rights (ECHR) protects freedom of expression but explicitly allows restrictions that are “necessary in a democratic society”—for example, to protect the reputation of others, national security, or public safety. This framework gives European regulators more room to impose legal duties on platforms, provided those measures are proportionate and pursue legitimate aims.

International human rights standards increasingly guide expectations as well. The UN Guiding Principles on Business and Human Rights (2011) and subsequent UN OHCHR reports from 2018–2023 on online hate speech have pushed the idea that tech firms bear responsibility for respecting fundamental rights, even where national laws are silent. These principles now inform how regulators design their legal frameworks—and how platforms justify their content moderation practices to a global audience.

Key Regulatory Frameworks: From the UK Online Safety Act to the EU DSA

The years 2022–2024 witnessed the emergence of the first comprehensive “systems-regulation” laws for online speech, particularly in the UK and EU. These represent a new legislative approach: rather than targeting individual pieces of illegal content, regulators now require platforms to build proportionate systems that address risks at scale.

The UK Online Safety Act 2023

Passed in October 2023 with phased implementation led by Ofcom through consultations running into 2024–2025, the Online Safety Act applies to user-to-user services and search engines. The strongest duties fall on “Category 1” services—those with the largest UK user bases and highest-risk features.

Key requirements include:

  • Conducting risk assessments for illegal and harmful content

  • Implementing content moderation systems to remove illegal content swiftly

  • Special protections for children online, including age verification and shielding minors from harmful speech

  • Duties to protect users from content that could cause significant harm

Crucially, the Act explicitly tries to balance safety with free expression. It imposes duties on services to have regard to users’ rights to freedom of expression and privacy, protects content of democratic importance, and safeguards recognised news publisher content. The goal is proportionate, targeted measures—not blanket censorship.

The EU Digital Services Act

The Digital Services Act began applying to Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) on 25 August 2023, with full application to all in-scope services across the EU from 17 February 2024. It introduces due-diligence obligations that go far beyond earlier European rules:

  • Annual risk assessments: identify systemic risks from platform design and content

  • Mitigation plans: address identified risks proportionately

  • Independent audits: verify compliance with legal duties

  • Researcher data access: enable independent study of platform harms

  • Transparency reports: provide public accountability on content moderation

The DSA requires platforms to remove illegal content (such as incitement to violence under national laws) quickly once notified, while also mandating transparency and appeal rights for users whose content is removed. Platforms must explain their algorithmic recommender systems and apply terms of service consistently.
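
The recommender-transparency duty is easier to picture with a sketch. The Python below is illustrative only: the signal names and weights are invented, and a real ranking system uses far more inputs, but it shows the kind of “main parameters” a platform could disclose, alongside a simple non-profiling alternative of the sort the DSA expects the largest platforms to offer.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    author_followed: bool    # does the viewer follow the author?
    engagement_score: float  # normalised engagement signal in [0, 1]
    recency_hours: float     # hours since the post was published

# Invented "main parameters" a platform might disclose in its terms of service.
MAIN_PARAMETERS = {"author_followed": 0.5, "engagement": 0.3, "recency": 0.2}

def rank_personalised(posts: list[Post]) -> list[Post]:
    """Rank posts using the disclosed parameters (highest score first)."""
    def score(p: Post) -> float:
        recency = max(0.0, 1.0 - p.recency_hours / 48.0)  # linear decay over 48h
        return (MAIN_PARAMETERS["author_followed"] * float(p.author_followed)
                + MAIN_PARAMETERS["engagement"] * p.engagement_score
                + MAIN_PARAMETERS["recency"] * recency)
    return sorted(posts, key=score, reverse=True)

def rank_without_profiling(posts: list[Post]) -> list[Post]:
    """Alternative feed that ignores personal signals: newest first."""
    return sorted(posts, key=lambda p: p.recency_hours)
```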

Comparative Approaches

Earlier European models focused more heavily on removal. Germany’s NetzDG (2017) required rapid takedowns of manifestly unlawful content, while France’s Avia Law was partially struck down by the Constitutional Council in 2020 for threatening legitimate speech. Australia’s Online Safety Act 2021 introduced “basic online safety expectations” and rapid takedown requirements, particularly for content involving children.

These varied approaches reflect ongoing experimentation with how to best protect individuals from harmful content while protecting freedom of expression online.

Who Is Responsible? Platforms, Users and New Enforcement Models

Regulators are moving from purely user-focused criminal law to systemic obligations that hold private companies accountable for how they design and operate their services. This shift toward “shared responsibility” changes who bears the burden of keeping online spaces safe.

Platform Duties and Penalties

Under the UK Online Safety Act, services face fines of up to 10% of global annual revenue for serious non-compliance. Ofcom can issue information notices, require changes to systems, and—in extreme cases—seek court orders to block access to non-compliant services in the UK.

The EU’s enforcement regime is similarly robust. The European Commission can fine VLOPs and VLOSEs up to 6% of worldwide turnover and require rapid changes to systems after risk assessments reveal problems. These are not theoretical powers: the DSA was built for active enforcement, and the Commission began opening formal proceedings against designated platforms within months of the obligations taking effect.

Individual User Liability

Criminal law still applies to individuals posting illegal content. In the UK, the Communications Act 2003 (section 127) and the Malicious Communications Act 1988 make certain forms of online hate speech and harassment a criminal offence. The Online Safety Act 2023 added new communications offences, including sending false or threatening communications, as part of ongoing efforts to keep these provisions fit for the digital age.

Across EU member states, criminal provisions against incitement to violence or hatred—derived from the 2008 EU Framework Decision—are used to prosecute egregious online cases. The criminal threshold for prosecution remains high, but individuals who cross it can face serious penalties.

Scaling Enforcement Without Over-Criminalising

Regulators are exploring new tools to scale enforcement while respecting privacy concerns and avoiding the criminalisation of minor infractions:

  • Administrative fines and notice-and-penalty systems for repeat abusers

  • Platform “three-strike” policies linked to identity verification, designed within data protection rules like GDPR

  • Requirements for platforms to report certain categories of serious harm to law enforcement

Institutional examples include Ofcom as the UK’s central regulator and the Digital Services Coordinators in each EU Member State working with the European Commission under the DSA. These regulators coordinate across borders to ensure consistent application of rules to global platforms.

Safeguarding Free Expression: Guardrails Against Over-Removal

Regulators now explicitly build free speech safeguards into safety laws, reacting to well-founded fears of over-censorship and political abuse. The risk isn’t hypothetical: content moderation at a global scale inevitably catches legitimate views and legal content in its nets.

UK Online Safety Act Protections

The OSA requires services to have regard to users’ rights to freedom of expression and privacy, grounding these duties in the Human Rights Act 1998 and ECHR Article 10. Specific protections include:

  • Safeguards for content of democratic importance

  • Protections for recognised news publisher content

  • Requirements for proportionate, targeted moderation rather than over-broad filtering

  • Transparency about how moderation decisions are made

These provisions aim to ensure that platforms don’t sacrifice public discourse on the altar of safety.

DSA User Rights

The Digital Services Act takes a different but complementary approach:

  • Users must be told why content is removed or accounts are restricted

  • Accessible appeal mechanisms must be available

  • Terms and conditions must be applied consistently and without discrimination

  • Platforms must explain how algorithmic systems recommend content

These rules respond to long-standing concerns about hidden political bias in content moderation and give users recourse when they believe legitimate speech has been wrongly suppressed.
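
To make the first two requirements concrete, the sketch below shows the kind of “statement of reasons” record a platform might send when it restricts content. The field names are invented for illustration; the DSA prescribes the substance (the ground relied on, the facts, whether automation was involved, and the redress routes) rather than any particular schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class StatementOfReasons:
    """What a user might be told when their content is restricted.

    Field names are invented for this sketch; the DSA prescribes the substance
    (grounds, facts, use of automation, redress routes), not a schema.
    """
    content_id: str
    restriction: str                        # e.g. "removal", "visibility limited"
    facts_and_circumstances: str            # what the decision was based on
    legal_ground: Optional[str]             # cited if the content is treated as illegal
    terms_of_service_clause: Optional[str]  # cited if the content broke platform rules
    detected_by_automated_means: bool
    decided_by_automated_means: bool
    redress_options: list[str] = field(default_factory=lambda: [
        "internal complaint / appeal",
        "out-of-court dispute settlement",
        "judicial redress before a national court",
    ])

notice = StatementOfReasons(
    content_id="post-123",
    restriction="removal",
    facts_and_circumstances="Reported as incitement to violence; reviewed by a moderator.",
    legal_ground="national criminal-law provision on incitement",
    terms_of_service_clause=None,
    detected_by_automated_means=True,
    decided_by_automated_means=False,
)
```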

Over-Removal in Practice

The risks of over-moderation are well-documented. Between 2017 and 2022, YouTube and Facebook’s automated extremism filters mistakenly deleted Syrian human-rights documentation and war-crimes evidence—content of immense historical and legal importance. During COVID-19 debates (2020–2022), legitimate scientific discussion and political speech were sometimes caught in misinformation crackdowns, including content from credentialed researchers questioning emerging evidence.

Courts play a crucial role in checking over-reach. The European Court of Human Rights in Delfi AS v. Estonia (2015) balanced platform liability with user speech rights. National courts in Germany, France, and the UK continue to scrutinise whether new powers meet the proportionality requirements that human rights law demands. Best practice requires regulators to understand context and build in review mechanisms.

Technology at the Frontier: AI Moderation, Encryption and Decentralised Platforms

Regulators are not only writing primary legislation but also grappling with the technical realities of AI tools, encryption, and new network architectures that don’t fit neatly into traditional regulatory frameworks.

AI-Driven Content Moderation

Platforms like Facebook, X (formerly Twitter), YouTube, and TikTok deploy AI at massive scale to scan billions of posts, images, and videos. This automation is essential—no human workforce could review user content at this volume—but it creates significant ethical challenges:

  • False positives: legitimate news flagged as violence

  • Language bias: non-English content moderated less accurately

  • Cultural blindness: satire and context lost on automated systems

  • Lack of transparency: users can’t understand why content was removed

The EU’s DSA and upcoming AI Act (political agreement reached December 2023, expected phased application from 2025) demand transparency and human oversight for high-risk AI systems. Platforms must explain how automated moderation works and provide human review of significant decisions affecting users.
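
A common pattern for combining automation with human oversight is confidence-based routing: the model acts on its own only when it is very confident, and borderline cases are escalated to human reviewers. The Python sketch below is a minimal illustration under that assumption; the thresholds, names, and queue are hypothetical rather than any platform’s actual pipeline.

```python
from dataclasses import dataclass
from queue import Queue

@dataclass
class ModerationDecision:
    post_id: str
    action: str         # "remove", "keep", or "human_review"
    model_score: float  # classifier's confidence that the post violates policy
    reason: str

# Hypothetical thresholds; real systems tune these per policy area and language.
AUTO_REMOVE_THRESHOLD = 0.98
HUMAN_REVIEW_THRESHOLD = 0.70

human_review_queue: Queue = Queue()

def moderate(post_id: str, model_score: float) -> ModerationDecision:
    """Route a post based on classifier confidence, escalating uncertain cases."""
    if model_score >= AUTO_REMOVE_THRESHOLD:
        decision = ModerationDecision(post_id, "remove", model_score,
                                      "automated removal at very high confidence")
    elif model_score >= HUMAN_REVIEW_THRESHOLD:
        decision = ModerationDecision(post_id, "human_review", model_score,
                                      "escalated to a human moderator")
        human_review_queue.put(decision)
    else:
        decision = ModerationDecision(post_id, "keep", model_score,
                                      "below any action threshold")
    # Every decision would be logged for transparency reports and audits.
    return decision
```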

The Encryption Debate

End-to-end encryption—used by WhatsApp, Signal, and others—protects data privacy but creates genuine challenges for detecting illegal content like child sexual abuse material. The UK’s Investigatory Powers Act 2016, and 2023–2024 proposals to update its powers, have sparked controversy.

The Online Safety Act included provisions that raised concerns from providers about potential “backdoors” that could undermine encryption. In late 2023, the UK government signalled it would not force proactive weakening of encryption while reserving powers to seek access in specific, legally authorised cases. This uneasy compromise reflects the tension between privacy concerns and the need to protect individuals—particularly children—from serious harm.

Decentralised and Federated Platforms

The rise of Mastodon, Bluesky, Matrix, and the broader ActivityPub ecosystem after 2022 reflects users seeking alternatives to centralised control. These platforms present regulatory challenges:

  • No single corporate entity to hold accountable

  • Fragmented moderation norms across instances

  • Harder enforcement of national rules across borders

Regulators are exploring obligations that attach to “service providers” or “admins” of large instances while respecting the grassroots nature of these communities. The focus is shifting toward interoperability-friendly rules rather than purely targeting individual tech companies.
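
The enforcement challenge becomes clearer with a sketch of how federated moderation typically works: every instance administrator keeps their own policy towards other servers, so there is no single control point for a regulator to target. The Python below is purely illustrative; the accept/limit/reject categories loosely mirror common fediverse practice rather than any specific platform’s API, and the domains are placeholders.

```python
from enum import Enum

class FederationPolicy(Enum):
    ACCEPT = "accept"  # federate normally
    LIMIT = "limit"    # accept, but keep out of public timelines
    REJECT = "reject"  # refuse content from this domain entirely

# Each instance admin maintains their own policy table; there is no global list.
# The domains below are placeholders, not real services.
instance_policy: dict[str, FederationPolicy] = {
    "friendly.example": FederationPolicy.ACCEPT,
    "spammy.example": FederationPolicy.LIMIT,
    "abusive.example": FederationPolicy.REJECT,
}

def handle_incoming_post(origin_domain: str) -> str:
    """Decide what this instance does with a post arriving from another server."""
    policy = instance_policy.get(origin_domain, FederationPolicy.ACCEPT)
    if policy is FederationPolicy.REJECT:
        return "dropped"
    if policy is FederationPolicy.LIMIT:
        return "stored, but hidden from public timelines"
    return "delivered to followers and public timelines"
```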

The Road Ahead: Towards a Sustainable Balance Between Rights and Safety

From 2024 onward, the debate is shifting from whether to regulate online platforms to how to calibrate and correct these new regimes. The online-safety frameworks now in place will only succeed if they evolve based on evidence and remain accountable to the publics they serve.

Effective balance will depend on several factors:

  • Transparent data about takedowns, appeals, and error rates from platforms

  • Civil society involvement in Ofcom and EU consultations, including representatives of minorities targeted by online hate speech, journalists, and children’s advocates

  • Ongoing review of whether rules achieve their goals without chilling free speech online

Upcoming Milestones

Key dates to watch include:

  • Ofcom’s multi-stage codes of practice for the UK Online Safety Act rolling out through 2024–2025 before full enforcement

  • The EU’s first full annual DSA risk-assessment and audit cycle for VLOPs/VLOSEs from 2024–2025

  • Crisis-response rules being tested after events like elections or conflicts

  • Post-implementation reviews of whether significant changes are needed to the regulatory framework

Remaining Fault Lines

Several hard questions remain unresolved:

  • How to handle “lawful but awful” content—harmful speech that doesn’t cross the criminal threshold but causes real damage

  • Long-term governance for AI moderation, including independent audits and cross-border cooperation

  • Protecting children online without eroding privacy or blocking access to legitimate educational resources

  • Ensuring that online free speech protections extend to marginalised communities whose views may be unpopular

The balance between rights and safety will be continuously renegotiated as technology and politics evolve. Regulators, social media companies, and users share responsibility for keeping digital platforms both open and safe. No regulatory framework will be perfect; the goal is to build systems that learn, adapt, and remain accountable, respecting the democratic importance of free expression while taking seriously the real harms that unregulated online spaces can inflict.

Getting this balance right matters enormously. The online world is now inseparable from civic life, economic opportunity, and personal identity. The legal frameworks taking shape today will determine whether the internet remains a space for legitimate speech, creativity, and connection—or becomes fragmented by over-censorship and distrust. Staying informed, engaging with consultations, and holding both platforms and regulators accountable is how citizens can help shape this future.
