Social Media Regulation

  • By Paul Waite
  • 22 min read

The rules governing what happens on social media are changing faster than ever. Between the UK’s Online Safety Act receiving Royal Assent in October 2023, the EU’s Digital Services Act (DSA) fully applying from February 2024, and over 250 state bills introduced across the US since 2021, platforms and users alike face a new reality. This article breaks down what social media regulation actually does, who it covers, how it’s enforced, and why children’s safety, algorithms, and misinformation have become central battlegrounds.

Overview of social media regulation in 2024–2026

Social media regulation matters now because the harms have become impossible to ignore. In recent years, 41% of US teens have reported experiencing cyberbullying, and research links increased social media use to rising mental health concerns among young people. Governments worldwide have concluded that self-regulation by platforms isn’t delivering the results society needs.

The regulatory landscape in 2024–2026 revolves around several key themes:

  • User safety: Protecting users from illegal content, harassment, and exploitation

  • Free speech: Balancing harm reduction with the ability to express opinions

  • Platform power: Holding social media companies accountable for the systems they design

  • Cross-border enforcement: Making global platforms comply with national laws

What’s changed is the shift from voluntary platform self-regulation to statutory duties enforced by independent regulators. In the UK, Ofcom now oversees online safety. The European Commission enforces DSA obligations against very large online platforms (VLOPs). In the US, the FTC and state attorneys general play increasingly active roles, even without comprehensive federal legislation.

This article will walk you through what these regulations aim to achieve, the specific frameworks in the UK, EU, and US, who they apply to, what obligations platforms must meet, and where the law is heading next. Whether you’re in compliance, policy, or simply trying to understand how the internet is governed, these developments matter.

What social media regulation aims to achieve

At its core, social media regulation pursues several interconnected goals: reducing illegal and harmful content, protecting vulnerable users (especially children), improving transparency about how platforms operate, and maintaining space for innovation and free expression. The challenge lies in achieving all of these simultaneously.

Regulators across jurisdictions focus on a common set of concrete harms:

  • Cyberbullying and online harassment

  • Image-based abuse (including intimate images shared without consent)

  • Terrorist content and violent extremism

  • Child sexual abuse material (CSAM)

  • Self-harm and suicide promotion

  • Coordinated disinformation campaigns

  • Online pornography accessible to minors

What distinguishes modern regulation from earlier approaches is the focus on systemic risks, not just individual pieces of content. Regulators now ask how recommendation algorithms might amplify harmful material at scale, rather than simply requiring takedowns after the fact. This represents a fundamental shift in how governments think about platform accountability.

It’s worth understanding the distinction between “illegal content” and “legal but harmful” content. Illegal content includes material that violates criminal law—incitement to violence, CSAM, terrorist propaganda. “Legal but harmful” content might include pro-anorexia material or extreme misogyny that, while not criminal, can cause real damage. Different jurisdictions treat these categories differently. The UK initially proposed broad duties around legal but harmful content for adults but later narrowed this approach following concerns about free speech.

Key frameworks: UK, EU, and US approaches

Three major regulatory models now shape how social media companies operate globally: the UK’s Online Safety Act, the EU’s Digital Services Act and related laws, and the more fragmented approach in the United States. Each reflects different legal traditions, political priorities, and views on the proper relationship between government and platforms.

The UK Online Safety Act

The Online Safety Bill became law when it received Royal Assent on 26 October 2023, transforming into the Online Safety Act. Implementation is phased across 2024–2026, with Ofcom leading as the independent regulator. The Act imposes a “duty of care” on platforms, requiring them to take proactive steps to protect users from illegal content and, for services likely to be accessed by children, from content harmful to minors.

Key UK obligations include:

  • Risk assessments for illegal content (due 2025) and child safety assessments (by July 2025)

  • Robust content moderation systems

  • User reporting and redress mechanisms

  • Age verification for platforms hosting online pornography

  • Transparency reporting to Ofcom

The Act also creates new criminal offences, including cyberflashing and threatening communications, which came into force on 31 January 2024.

The EU Digital Services Act

The DSA represents the most comprehensive harmonised framework for regulating online platforms across a major economic bloc. While obligations for VLOPs (platforms with 45+ million monthly EU users) took effect in 2023, the full DSA applied to all in-scope intermediaries from 17 February 2024.

Key EU obligations include:

  • Notice-and-action mechanisms for illegal content

  • Transparency reports on content moderation

  • Risk assessments for systemic harms (VLOPs)

  • Algorithmic audits and researcher data access (VLOPs)

  • Clear terms of service and complaint handling

The DSA takes a tiered approach: the bigger and riskier the platform, the more demanding the requirements. VLOPs like Meta, TikTok, and YouTube face the strictest scrutiny, including potential fines of up to 6% of global annual turnover.

The United States approach

The US lacks a single federal law governing social media in the way the UK and EU now have. Section 230 of the Communications Decency Act, enacted in 1996, gives platforms broad immunity from liability for user-generated content—a protection that has enabled the growth of the modern internet but now faces criticism from both political parties.

Current US regulatory activity includes:

  • Over 250 state bills introduced since 2021 across 38 states

  • The Kids Online Safety Act (KOSA) advancing with bipartisan support

  • California’s Age-Appropriate Design Code (CAADCA), though facing court challenges

  • Proposals to narrow or reform Section 230

The constitutional protection for free speech under the First Amendment complicates federal regulation, with courts striking down several state laws as unconstitutional. The result is a patchwork of laws and ongoing uncertainty for platforms operating nationally.

Who social media regulation applies to

Laws typically cover “online platforms” that host user-generated content or enable user interaction—a definition that goes well beyond traditional social networks. Understanding who falls within scope is essential for compliance.

Services typically covered include:

| Service type | Examples |
| --- | --- |
| Social networks | Facebook, X (formerly Twitter), Threads, LinkedIn |
| Video-sharing platforms | YouTube, TikTok, Vimeo |
| Messaging and chat apps | WhatsApp, Telegram, Snapchat |
| Dating apps | Tinder, Bumble, Grindr |
| Gaming platforms with social features | Discord, Twitch, Roblox |
| Forums and community sites | Reddit, specialised interest forums |

The concept of “relevant links” to a jurisdiction determines whether a platform must comply with a country’s laws. A US-based platform with millions of UK users falls within the Online Safety Act and Ofcom’s oversight, regardless of where the company is headquartered.

Both the UK and EU use size, functionality, and risk criteria to categorise services:

  • UK: Category 1 services (highest reach) face the most extensive duties

  • EU: VLOPs and VLOSEs (very large online search engines) with 45+ million monthly EU users face enhanced obligations

Even small services can be caught by regulation if they present particular risks. A niche forum popular with minors that hosts extremist or pornographic content would face scrutiny despite its size. The focus is on potential harm, not just user numbers.
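To make these size and risk thresholds concrete, here is a minimal sketch of how a compliance team might flag which regimes a service falls under. The 45 million figure is the DSA's VLOP threshold mentioned above; the function, field names, and categorisation labels are illustrative assumptions, not the statutory tests.

```python
from dataclasses import dataclass

DSA_VLOP_THRESHOLD = 45_000_000  # average monthly active EU users

@dataclass
class Service:
    name: str
    monthly_eu_users: int
    monthly_uk_users: int
    likely_accessed_by_children: bool
    hosts_high_risk_content: bool  # e.g. pornography or extremist material

def applicable_regimes(svc: Service) -> list[str]:
    """Rough, illustrative scoping check; not legal advice."""
    regimes = []
    # DSA: the most demanding duties hinge on the 45 million EU user threshold.
    if svc.monthly_eu_users >= DSA_VLOP_THRESHOLD:
        regimes.append("EU DSA: VLOP-tier duties (risk assessments, audits)")
    elif svc.monthly_eu_users > 0:
        regimes.append("EU DSA: baseline intermediary duties")
    # UK OSA: 'relevant links' to the UK bring a service into scope,
    # regardless of where the company is headquartered.
    if svc.monthly_uk_users > 0:
        regimes.append("UK OSA: illegal content duties")
        if svc.likely_accessed_by_children:
            regimes.append("UK OSA: child safety duties")
    # Small but high-risk services can still attract regulatory scrutiny.
    if svc.hosts_high_risk_content:
        regimes.append("Flag for enhanced risk review despite size")
    return regimes

print(applicable_regimes(Service("ExampleForum", 2_000_000, 500_000, True, True)))
```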

Core obligations on social media platforms

Modern regulation shifts platforms from a passive “host” model—where they simply provided infrastructure—to active risk management. However, most frameworks stop short of requiring general pre-publication monitoring, which would be impractical and raise serious concerns about censorship.

High-level duties that platforms typically face include:

  • Conducting regular risk assessments for illegal and harmful content

  • Implementing proportionate safety measures based on those assessments

  • Providing reporting and redress tools for users

  • Publishing transparency reports

  • Cooperating with regulators and responding to information requests

These obligations differ depending on user groups. Platforms must take stronger, more proactive measures to protect children, while for adults the emphasis shifts toward user choice and empowerment tools.

Under the UK Online Safety Act:

  • Illegal content risk assessments due in 2025

  • Child safety assessments due by July 2025

  • Codes of practice from Ofcom guiding implementation through 2024–2026

Under the EU DSA:

  • Annual transparency reports for VLOPs/VLOSEs from 2023–2024 onwards

  • Systemic risk assessments and mitigation measures

  • Independent audits of compliance

Both frameworks require clear terms of service, consistent enforcement of those terms, and accessible complaint mechanisms where users can challenge moderation decisions.
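For teams working toward these duties, a transparency report is ultimately a structured summary of moderation activity over a reporting period. Below is a minimal sketch of the kind of record that could feed one; the fields are illustrative and not taken from any regulator's template.

```python
from dataclasses import dataclass

@dataclass
class TransparencyReportPeriod:
    period: str                # e.g. "2025-H1"
    notices_received: int      # user and trusted-flagger reports
    notices_actioned: int      # reports that led to removal or restriction
    median_response_hours: float
    proactive_removals: int    # content detected by the platform's own tools
    appeals_received: int
    appeals_upheld: int        # moderation decisions reversed on appeal

def actioned_rate(report: TransparencyReportPeriod) -> float:
    """Share of notices that led to action, a typical headline figure."""
    if report.notices_received == 0:
        return 0.0
    return report.notices_actioned / report.notices_received
```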

Illegal content duties

“Illegal content” under these frameworks includes material that violates criminal law. Concrete examples include:

  • Terrorism and violent extremism

  • Child sexual abuse material (CSAM)

  • Serious hate crime

  • Threats and harassment

  • Intimate image abuse (including revenge pornography)

  • Fraud and certain types of foreign interference

Platforms must act “expeditiously” to remove illegal content once they have actual knowledge of it. The EU DSA establishes notice-and-action rules requiring platforms to acknowledge reports, investigate, and respond. The UK Online Safety Act imposes proactive duties for certain high-risk content categories.
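In engineering terms, notice-and-action is a workflow with auditable timestamps at each stage: acknowledge the report, assess it, decide, and give reasons. The sketch below reflects that broad shape only; the class, field names, and decision labels are hypothetical rather than the DSA's own schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Decision(Enum):
    REMOVED = "removed"
    RESTRICTED = "restricted"
    NO_ACTION = "no_action"

@dataclass
class IllegalContentNotice:
    content_id: str
    reporter_id: str
    alleged_offence: str                      # e.g. "terrorist content"
    received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    acknowledged_at: datetime | None = None   # confirm receipt to the reporter
    decided_at: datetime | None = None
    decision: Decision | None = None
    statement_of_reasons: str | None = None   # explanation owed to the uploader

def handle_notice(notice: IllegalContentNotice, is_illegal: bool) -> IllegalContentNotice:
    """Acknowledge, assess, decide, and record reasons for a single notice."""
    notice.acknowledged_at = datetime.now(timezone.utc)
    if is_illegal:
        notice.decision = Decision.REMOVED
        notice.statement_of_reasons = "Removed: assessed as illegal under applicable law."
    else:
        notice.decision = Decision.NO_ACTION
        notice.statement_of_reasons = "Not removed: content not assessed as illegal."
    notice.decided_at = datetime.now(timezone.utc)
    return notice
```

In practice the assessment step would route to trained moderators or automated classifiers, and the recorded timestamps would feed transparency reporting and regulator information requests.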

For terrorism and CSAM specifically, platforms are expected to deploy automated detection tools and maintain dedicated moderation teams. The urgency here reflects the severity of harm—Australia’s eSafety Commissioner, for instance, requires removal of certain harmful content within 24 hours.

The UK has also introduced new criminal offences:

  • Cyberflashing (sending unsolicited sexual images)

  • Threatening communications

These came into force on 31 January 2024, meaning individuals can now be prosecuted directly, not just platforms.

Search services have their own obligations: they must downrank or delist illegal results and implement safe search options where relevant.

Content harmful to children

Children’s online safety is the most politically salient driver of new regulation. The UK, EU, and multiple US states have all prioritised this area, reflecting widespread concern among parents, educators, and lawmakers.

Categories of harmful content that regulations typically address include:

  • Online pornography and age-inappropriate content

  • Extreme violence and gore

  • Self-harm and suicide content

  • Eating disorder promotion material

  • Drug promotion

  • Grooming and inappropriate contact

The UK Online Safety Act distinguishes between “primary priority” content (the most serious harms) and “priority” content (significant but less severe). Services likely to be accessed by children must prevent exposure to such content or provide age-appropriate experiences.

Regulators expect platforms to deploy specific tools:

| Tool | Purpose |
| --- | --- |
| Age assurance/verification | Preventing minors from accessing adult content |
| Default safety settings | Protecting children by default rather than opt-in |
| High-privacy profiles | Limiting discoverability and contact from strangers |
| Parental controls | Giving parents more control over children's online activity |
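To show what "protected by default" can mean at the settings level, here is a minimal sketch of defaults a platform might apply when an account is assessed as belonging to a child. The field names and specific choices are illustrative, not drawn from any code of practice.

```python
from dataclasses import dataclass

@dataclass
class AccountSettings:
    profile_discoverable: bool
    direct_messages_from: str        # "everyone", "friends", or "no_one"
    personalised_recommendations: bool
    adult_content_visible: bool
    parental_controls_available: bool

def default_settings(is_minor: bool) -> AccountSettings:
    """Apply protective defaults for minors; adults can opt in to more exposure."""
    if is_minor:
        return AccountSettings(
            profile_discoverable=False,          # limit contact from strangers
            direct_messages_from="friends",
            personalised_recommendations=False,  # reduce algorithmic amplification risk
            adult_content_visible=False,
            parental_controls_available=True,
        )
    return AccountSettings(True, "everyone", True, True, False)
```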

The UK has set explicit timelines for online pornography sites to implement age verification, with deadlines around early 2025. Websites that fail to restrict content appropriately could face enforcement action, including potential blocking.

Protecting children isn’t solely a platform responsibility. Parents, schools, and child-support organisations all play roles. Regulation works best when it enables these stakeholders to engage effectively with the tools and resources platforms provide.

Adult user controls and empowerment

For adults, many laws prioritise user choice and transparency over outright bans on legal content. The principle is that adults should have the ability to curate their own online environment, armed with knowledge about how platforms work.

Specific mechanisms that platforms must or should provide include:

  • Content filters for sensitive topics (violence, self-harm, hate speech)

  • Controls over who can contact or message the user

  • Options to limit interactions to verified accounts

  • Granular settings for comments and replies

  • Ability to see chronological feeds rather than algorithmic recommendations

Under the UK Online Safety Act, Category 1 services (the largest platforms) must provide enhanced control tools for adults. These services must also give users clear, accessible explanations of how their recommendation algorithms work at a high level.

Identity verification represents one approach to reducing anonymous abuse. Adults can choose to verify their identity, and platforms can enable users to limit interactions to verified accounts. However, verification must remain voluntary and privacy-preserving—mandating it would raise serious concerns about surveillance and access.
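One way to picture the verified-only option is as a filter applied before an interaction reaches the user. The sketch below assumes a hypothetical user model with a voluntary `is_verified` flag and a per-user set of muted topics; it is not any platform's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class User:
    user_id: str
    is_verified: bool            # voluntary identity verification

@dataclass
class Preferences:
    verified_interactions_only: bool
    muted_topics: set[str]       # e.g. {"violence", "self_harm"}

def interaction_allowed(sender: User, prefs: Preferences, topics: set[str]) -> bool:
    """Return True if a reply or message should reach the recipient."""
    if prefs.verified_interactions_only and not sender.is_verified:
        return False
    if topics & prefs.muted_topics:  # any overlap with muted topics blocks it
        return False
    return True
```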

There’s an inherent tension here. Safety tools that restrict content or block users can, if poorly designed, impact free speech and lead to over-moderation. Platforms must navigate a delicate balance between empowering users and avoiding censorship of legitimate expression.

Regulation of algorithms and recommender systems

Recent laws increasingly recognise that the most significant harms don’t come from individual pieces of content but from systems that amplify harmful material at scale. Recommendation algorithms—the code that decides what appears in your feed—are now a central regulatory concern.

Both the UK Online Safety Act and EU DSA require platforms to assess how their recommendation systems might amplify:

  • Illegal content

  • Suicide and self-harm material

  • Extremist content and violent radicalisation

  • Disinformation and manipulated media

Mitigation measures that regulators expect include:

  • Changing default recommendations for minors to exclude harmful content

  • Limiting auto-play features that encourage extended viewing

  • Addressing “rabbit hole” design patterns that progressively serve more extreme content

  • Providing user controls to switch off personalised recommendations

Transparency obligations are substantial. VLOPs under the DSA must publish detailed annual transparency reports explaining how their systems work. Ofcom is consulting on algorithmic safety and transparency through 2024–2026, with codes of practice expected to provide more specific guidance.

One of the most significant developments is the requirement for platforms to provide data access to vetted researchers studying systemic risks. This enables independent analysis of how algorithms function in practice—though it must be balanced against trade secrets and user privacy.
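To illustrate the kind of mitigation regulators have in mind, here is a minimal sketch of a recommendation step that filters flagged categories for minors and honours an opt-out of personalisation. The classifier labels, scoring, and function shape are simplified assumptions, not a description of any real recommender.

```python
from dataclasses import dataclass

HARMFUL_FOR_MINORS = {"self_harm", "eating_disorder", "extremism", "adult"}

@dataclass
class Item:
    item_id: str
    labels: set[str]         # output of upstream content classifiers
    engagement_score: float  # personalised relevance estimate
    published_ts: float      # unix timestamp, used for chronological ordering

def recommend(items: list[Item], is_minor: bool, personalised: bool, limit: int = 20) -> list[Item]:
    """Rank candidate items, applying safety filtering before ranking."""
    if is_minor:
        items = [i for i in items if not (i.labels & HARMFUL_FOR_MINORS)]
    if personalised:
        ranked = sorted(items, key=lambda i: i.engagement_score, reverse=True)
    else:
        # Chronological fallback when the user switches off personalisation.
        ranked = sorted(items, key=lambda i: i.published_ts, reverse=True)
    return ranked[:limit]
```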

Enforcement, penalties, and cross-border reach

Powerful enforcement tools are essential for making regulation credible with global tech firms that have resources to outspend many governments. Modern frameworks give regulators significant teeth.

Main enforcement instruments include:

  • Investigations and information requests

  • Independent audits (mandatory for VLOPs under DSA)

  • Binding orders to change systems or practices

  • Substantial administrative fines

  • In extreme cases, blocking access via ISPs or payment providers

The financial penalties are designed to be meaningful even for the largest companies:

| Jurisdiction | Maximum fine |
| --- | --- |
| UK (Online Safety Act) | £18 million or 10% of global annual revenue (whichever is higher) |
| EU (Digital Services Act) | 6% of global annual turnover |
| Australia (Online Safety Act) | AUD 555,000 per individual or 10% of Australian turnover |

These numbers aren’t theoretical. Meta paid €1.2 billion under GDPR in 2023 for data transfers to the US. TikTok was fined €345 million in 2023 for child privacy breaches. Total EU tech fines have exceeded €5 billion since 2018.
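To see how the "whichever is higher" rule in the UK cap plays out, here is a tiny worked example; the revenue figure is invented purely for illustration.

```python
UK_FIXED_CAP_GBP = 18_000_000   # £18 million
UK_REVENUE_SHARE = 0.10         # 10% of qualifying worldwide revenue

def uk_max_fine(global_annual_revenue_gbp: float) -> float:
    """Maximum penalty under the UK Online Safety Act: the higher of the two caps."""
    return max(UK_FIXED_CAP_GBP, UK_REVENUE_SHARE * global_annual_revenue_gbp)

# A hypothetical platform with £30bn global annual revenue:
print(f"£{uk_max_fine(30_000_000_000):,.0f}")  # £3,000,000,000
```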

Cross-border enforcement is explicit in these frameworks. Ofcom can act against US-based platforms serving large UK user bases. The European Commission can investigate any VLOP serving EU users, regardless of where the company is headquartered. In cases of serious, persistent non-compliance, measures can extend to blocking access—though this remains a last resort.

The UK has introduced potential personal liability for senior managers who fail to comply with certain enforcement notices. This is designed to strengthen accountability at board level and ensure that compliance is a C-suite priority.

Role of regulators and advisory bodies

Several key regulators now oversee social media compliance:

| Jurisdiction | Primary regulator(s) |
| --- | --- |
| UK | Ofcom |
| EU | European Commission + national Digital Services Coordinators |
| US | FTC, state attorneys general (no single federal regulator) |
| Australia | eSafety Commissioner |

Specialised advisory bodies are being established to provide expert input:

  • Ofcom’s advisory committee on disinformation and misinformation (meetings planned from April 2025)

  • Expert groups on violence against women and girls

  • Researcher access panels for DSA data provisions

Regulators are producing detailed codes of practice and guidance documents. Ofcom has published an Online Safety Act implementation roadmap with consultations running through 2024–2026. The European Commission has issued guidance on DSA compliance and is actively monitoring VLOP adherence.

Early enforcement actions under the DSA include investigations into X (formerly Twitter) regarding its handling of illegal content and disinformation. These cases will set important precedents for how aggressively regulators pursue non-compliance.

Specific focus areas: women, girls, and vulnerable groups

Regulators increasingly recognise that certain groups experience disproportionate online abuse and harm. Women, girls, ethnic minorities, LGBTQ+ users, and disabled people face targeted harassment at higher rates than the general population.

The UK Online Safety Act specifically prioritises tackling:

  • Intimate image abuse (including revenge pornography)

  • Cyberflashing

  • Online stalking

  • Harassment that disproportionately affects women and girls

New offences and quicker takedown requirements reflect this priority. Guidance has been published in consultation with the Victims’ Commissioner and the Domestic Abuse Commissioner to ensure these provisions are effective.

Platform responsibilities for vulnerable groups include:

  • Rapid removal of reported intimate image abuse

  • Clear, accessible reporting routes

  • Specialist moderation teams trained on gender-based violence

  • Partnerships with NGOs and support organisations

Laws increasingly require platforms to conduct impact assessments considering specific groups—not just the “average” user. This means asking how moderation policies, algorithm design, and safety features affect women, minorities, and other vulnerable communities differently.

Misinformation, disinformation, and democratic integrity

Major elections in 2024–2025, including the US presidential election, UK general election, and European Parliament elections, amplified concerns about online misinformation and foreign interference. Platforms faced intense scrutiny over their handling of false claims, manipulated media, and coordinated influence campaigns.

Most democratic jurisdictions are cautious about regulating “truth” directly. The concerns about government determining what’s true are obvious. Instead, the focus falls on:

  • Illegal content (e.g., foreign interference offences, electoral fraud, incitement)

  • Transparency around political advertising (who paid, how much, who was targeted)

  • Labelling manipulated media and AI-generated content

  • Cooperation with election authorities during campaign periods

The EU DSA requires VLOPs to assess risks to electoral integrity and take mitigation measures. The UK is establishing an advisory committee on disinformation and misinformation through Ofcom, with meetings planned from April 2025.

There’s genuine tension here. Removing harmful misinformation protects democratic processes, but aggressive content removal can fuel accusations of censorship or partisan bias. Platforms and regulators must navigate this carefully, with transparency and consistent standards being essential to maintaining trust.

Economic and law-and-economics perspectives on social media regulation

Economic analysis provides useful insights into when regulation is likely to improve welfare versus when markets and user choice might work better. Social media platforms are multisided markets, balancing users who want engaging content, advertisers who pay for attention, and creators who provide material. Heavy-handed rules risk disrupting this balance in ways that harm all parties.

The “least-cost avoider” principle asks: who can most efficiently prevent harm? For some problems, platforms are clearly the right subject of regulation:

  • CSAM detection (platforms can deploy AI with 95% accuracy)

  • Terrorist content removal (requires scale and expertise)

  • Systemic algorithm risks (only platforms can modify their systems)

For other issues, different approaches might be more effective:

  • Screen time management (parental controls, education)

  • Exposure to challenging ideas (user tools, media literacy)

  • Adult content preferences (user choice, not blanket bans)

A key concern is “collateral censorship.” If liability rules are too strict, platforms may remove borderline lawful content to avoid risk. This chilling effect reduces diversity of speech and can silence legitimate voices—particularly smaller creators and controversial but legal viewpoints.

The case for robust platform duties is strongest where harms are severe (children’s safety, serious crime) and platforms are uniquely positioned to act. The case for lighter-touch approaches is stronger for adult speech, matters of opinion, and areas where user empowerment can work. Regulation should be proportionate, targeting the most serious harms while preserving space for innovation and expression.

Future directions and policy debates

Social media regulation will continue evolving through 2025–2030, driven by technological change and accumulated experience with current frameworks. Several developments will shape the landscape:

AI-generated content and deepfakes: As synthetic media becomes more convincing and accessible, platforms and regulators will need new tools to identify and label AI-generated material. The EU AI Act, which imposes transparency and labelling obligations on AI-generated and manipulated content, represents an early framework.

Immersive environments: AR/VR platforms like Meta’s Quest and emerging metaverse services create new challenges for content moderation in three-dimensional, real-time environments.

End-to-end encryption: Tensions between privacy (encrypted messaging) and law enforcement access (detecting illegal content) remain unresolved. Businesses and civil society are watching how governments approach this delicate balance.

Key debates ahead include:

  • Whether to reform or replace Section 230 in the US

  • How far to push age verification without creating surveillance infrastructure

  • Whether to create specialised digital regulators in more countries

  • How to handle decentralised platforms that resist traditional enforcement

Concrete upcoming milestones include:

  • Reviews of UK Online Safety Act implementation (ongoing through 2026)

  • EU DSA evaluation clauses triggering formal reviews

  • Expected new national and state laws on children’s online safety

  • Potential US federal legislation if political conditions align

The trade-offs are clear: harm reduction versus free speech, innovation versus compliance costs, national sovereignty versus global platforms. Getting regulation right requires evidence-based approaches that draw on data from regulators, independent research, and civil society—not purely reactive law-making after high-profile incidents.

Key takeaways

Social media regulation has entered a new era. Here’s what matters most:

  • The UK Online Safety Act and EU Digital Services Act represent the most comprehensive frameworks globally, with significant obligations now in force

  • Children’s safety is the central political driver, with robust duties on platforms to protect children from harmful content

  • Algorithms are now regulatory targets, not just individual pieces of content

  • Enforcement carries real teeth—fines up to 10% of global revenue and personal liability for executives

  • The US remains fragmented but is moving toward federal action on children’s online safety

  • Free speech concerns require careful balancing against safety measures

  • Compliance costs are substantial, running into billions annually for major platforms

Whether you’re a compliance professional, policy analyst, parent, or simply an engaged user, understanding these frameworks is essential. The rules governing what social media companies must do—and what users can expect—are being written now.

Stay informed by following regulatory updates from Ofcom, the European Commission, and relevant national authorities. If you’re operating a platform or service, engage with consultation processes while the guidance can still be shaped. And if you’re a user concerned about online harms or about overreach, make your voice heard; these regulations will affect all of us for years to come.

