Online Harms Legislation

  • By Paul Waite
  • 21 min read

Governments around the world are racing to address the dark side of the digital age. Laws like the Kids Online Safety Act (KOSA) in the United States and the Online Harms Act (Bill C-63) in Canada represent a new generation of regulation aimed at reducing harmful content and dangerous platform design online. This article breaks down these major initiatives, explains their core provisions, and examines the intense debates over free expression, privacy, and platform accountability that surround them.

Introduction to Online Harms Legislation

The push to regulate online harms has reached a critical inflection point. Policymakers in the United States, Canada, the United Kingdom, and beyond are grappling with evidence that social media platforms can amplify hate speech, violent extremism, non-consensual intimate content, and materials promoting self-harm—all while using addictive design features that keep users, especially children, scrolling for hours.

The urgency is driven by hard data. Research from the early 2010s onward has documented troubling links between heavy social media use among young people and rising rates of anxiety, depression, bullying, and eating disorders. In 2021, Facebook whistleblower Frances Haugen leaked internal company documents showing that Instagram’s own researchers had found the platform negatively affected the mental health and well-being of teenage girls. Those revelations helped catalyze legislative action, including the introduction of KOSA on February 16, 2022, by Senators Richard Blumenthal (D-CT) and Marsha Blackburn (R-TN). Meanwhile, Canada’s Bill C-63 received its first reading during the 44th Parliament in early 2024.

This article compares these major initiatives—KOSA in the U.S., Canada’s Online Harms Act, and the UK Online Safety Act 2023—explaining their main provisions and exploring the ongoing debates about how to protect users without chilling lawful expression or enabling government overreach.

What Is “Online Harms” Legislation?

Online harms legislation refers to laws designed to reduce the negative effects of digital platforms on users, particularly children. In practical terms, these laws target two broad categories of harm:

  • Illegal content: Child sexual abuse material, hate propaganda, content that incites violent extremism, non-consensual intimate imagery, and other material clearly prohibited by criminal law.

  • Lawful but harmful content: Material that may not be strictly illegal but contributes to serious harms—such as content promoting suicide, eating disorders, substance use disorders, or bullying and harassment.

Beyond content, these laws also address platform design. Features like infinite scrolling, autoplay videos, algorithmic recommendation systems, and constant push notifications are increasingly recognized as tools that exploit psychological vulnerabilities, especially in young users.

Online harms legislation typically works by:

| Policy Tool | Description |
| --- | --- |
| Duty of care | Requires platforms to assess and mitigate foreseeable risks to users |
| Mandatory reporting | Obliges platforms and internet service providers to report internet child pornography and preserve data |
| Content removal obligations | Sets timelines for removing illegal content once identified |
| Privacy and parental tools | Mandates default high-privacy settings for minors and parental controls |
| Regulatory bodies | Creates or empowers agencies (like the UK’s Ofcom or Canada’s proposed Digital Safety Commission) to enforce compliance |

These frameworks often amend existing statutes. In Canada, Bill C-63 proposed changes to the Criminal Code, the Canadian Human Rights Act, and the 2011 law governing mandatory reporting of internet child pornography. In the UK, the Online Safety Act 2023 empowers Ofcom to enforce duties on user-to-user services and search engines.

The distinction between illegal content and “lawful but harmful” content sits at the heart of free speech debates. Governments must decide what falls into each category, and critics worry that vague definitions will pressure platforms to over-remove legitimate speech.

The Kids Online Safety Act (KOSA) in the United States

The Kids Online Safety Act is a federal bill introduced in 2022 to address online harms affecting children and teens on large social media platforms. Its bipartisan sponsors—Senators Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT)—framed it as a response to mounting evidence that platforms were harming young users’ mental health.

Legislative Trajectory

KOSA’s path through Congress illustrates both the momentum behind kids online safety legislation and the political complexities involved:

  • February 2022: Initial introduction following Frances Haugen’s testimony and document leaks

  • May 2024: Attempt to attach KOSA to FAA reauthorization legislation

  • July 2024: Combined with COPPA 2.0 and the Filter Bubble Transparency Act into the Kids Online Safety and Privacy Act (KOSPA, S.2073), which passed the Senate with an overwhelming 91-3 vote

  • December 2025: Representative Gus Bilirakis introduced the Kids Internet and Digital Safety (KIDS) Act, incorporating KOSA with added age verification mandates

  • March 2026: Advancement from the House Energy and Commerce Subcommittee

Despite bipartisan support from over 60 Senators, KOSA has faced ongoing House debates. A House Committee vote on a related kids’ safety package resulted in a 28-24 split, reflecting continued disagreement over the bill’s scope and potential impacts. Committee Republicans and Democrats have clashed over specific provisions, though both parties agree that something must be done to protect children online.

The bill focuses on platform design and recommendation systems rather than directly censoring individual pieces of content. It targets algorithms, default settings, autoplay features, and endless scroll functions—the architectural choices that keep users engaged but may amplify harmful content.

Core Obligations and Protections Under KOSA

KOSA would impose several significant obligations on covered platforms, which include social media, online gaming, virtual reality, messaging, and video streaming services. Email providers, ISPs, and educational institutions are exempt.

Safety by default design requirements:

  • Disable addictive features like infinite scroll, autoplay, and push notifications for users under 17

  • Enable opt-outs from personalized algorithmic recommendations

  • Restrict geolocation sharing, stranger messaging, and live features without parental consent or age confirmation

  • Set strong privacy defaults for minor users
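To make the design requirements above concrete, here is a minimal sketch—purely illustrative, with all class and function names hypothetical—of how a platform might apply KOSA-style safe defaults to accounts belonging to users under 17:

```python
from dataclasses import dataclass

@dataclass
class AccountSettings:
    # Engagement-maximizing features, enabled by default for adults
    autoplay: bool = True
    infinite_scroll: bool = True
    push_notifications: bool = True
    personalized_recommendations: bool = True
    geolocation_sharing: bool = True
    stranger_messaging: bool = True

def apply_minor_defaults(age: int, settings: AccountSettings) -> AccountSettings:
    """Illustrative only: disable addictive and risky features for users
    under 17, mirroring KOSA's safety-by-default design requirements."""
    if age < 17:
        settings.autoplay = False
        settings.infinite_scroll = False
        settings.push_notifications = False
        settings.personalized_recommendations = False  # opt-in, not default
        settings.geolocation_sharing = False           # parental consent required
        settings.stranger_messaging = False            # parental consent required
    return settings
```

This is a sketch of the policy logic, not a compliance implementation; real systems would also need age assurance, parental-consent flows, and audit logging.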

Parental tools and oversight:

  • Dashboards allowing parents to monitor time spent on platforms

  • Access to account settings and privacy controls

  • Clear pathways to report harms such as bullying, grooming, sexual exploitation, or substance abuse promotion

Targeted harms the bill addresses:

| Category | Examples |
| --- | --- |
| Mental health | Promotion of suicide, self-harm, eating disorders |
| Physical safety | Sexual exploitation, grooming, predatory contact |
| Illegal products | Age-restricted ads (tobacco, gambling), substance abuse content |
| Algorithmic amplification | Systems that identify harmful content and push it to vulnerable users |

Duty of care and accountability:

Platforms must exercise reasonable care to prevent harms to minors resulting from their services. This includes conducting risk assessments and implementing mitigation strategies. The bill creates accountability through:

  • Independent audits to verify platform safety claims

  • Researcher access to datasets for impact evaluation

  • Transparency requirements enabling regulators and the public to assess whether platforms are meeting their obligations

Enforcement structure:

The February 2024 amendments shifted enforcement priorities. The Federal Trade Commission would oversee definitions of harmful content and the duty of care, while state attorneys general would handle enforcement of minors’ safeguards, disclosures, and transparency requirements.

Support, Criticism, and Ongoing Debates Around KOSA

Support for KOSA:

KOSA has attracted endorsements from hundreds of organizations focused on child safety, mental health, and tech accountability. President Biden prioritized the legislation in his 2023 State of the Union address, arguing it would finally compel big tech companies to prioritize kids’ safety over engagement metrics.

Supporters argue that:

  • Platforms have consistently failed to self-regulate

  • Youth mental health crises demand legislative action

  • Design-focused requirements avoid direct censorship while addressing root causes

  • Bipartisan backing (91-3 Senate vote) demonstrates broad political consensus

Criticism from civil liberties groups:

Organizations like the Free Speech Coalition and Public Knowledge have raised concerns about the bill’s potential for unintended consequences:

  • Vague harm definitions: Critics argue that terms like “harm” could pressure platforms to over-remove content on LGBTQ+ issues, sexual health education, or controversial political speech—even without explicit takedown mandates

  • State AG enforcement: Allowing state attorneys general to enforce the law raises fears that politically motivated officials could target content they personally oppose

  • Age verification concerns: Although KOSA’s text avoids mandating blanket age verification, opponents worry platforms will implement invasive verification systems to comply with the duty of care

What KOSA does not do:

It’s important to note what the bill excludes:

  • It does not create a private right of action allowing individuals to sue over specific content

  • It does not directly mandate content takedowns

  • It does not require platforms to monitor private messaging features

Critics acknowledge these limitations but argue that indirect pressure—through design mandates and enforcement threats—could still push platforms to remove lawful content to avoid liability, potentially leaving kids and parents worse off than they would be under more targeted interventions.

The May 2023 revision addressed some concerns by explicitly listing covered harms (suicide promotion, eating disorders, substance abuse) rather than relying on open-ended definitions. But debates persist over whether the bill strikes the right balance between protection and expression.

Canada’s Online Harms Act (Bill C-63) and Related Measures

Canada’s approach to online harms legislation culminated in Bill C-63, the Online Harms Act, introduced in the 44th Parliament in February 2024. This comprehensive framework proposed to enact a new act respecting online harms while amending the Criminal Code, the Canadian Human Rights Act, and the 2011 law on mandatory reporting of internet child pornography.

The bill represented an ambitious attempt to address online harms through a unified regulatory framework. It proposed establishing a Digital Safety Commission with multiple offices handling compliance monitoring, safety standards, systemic risk assessments, complaints, and transparency.

Legislative context:

Bill C-63 built on earlier efforts:

  • Bill C-36 (2021): Focused on online hate speech with fines up to $50,000

  • Bill C-9 (Combatting Hate Act): Incorporated some online hate provisions from earlier bills

  • 2011 mandatory reporting law: Required reporting of internet child pornography by ISPs

Despite support from the governing Liberals and some safety advocates, the Online Harms Act provoked intense national debate about free expression, surveillance, and the scope of government power over digital spaces.

Key Provisions and Regulatory Architecture in Bill C-63

Types of harmful content targeted:

| Content Category | Description |
| --- | --- |
| Hate propaganda | Content promoting hatred against identifiable groups |
| Violent extremism | Material that incites violent extremism or terrorism |
| Child sexual abuse material | Images and content depicting child exploitation |
| Intimate content communicated without consent | Revenge porn and non-consensual intimate images |
| Bullying and harassment | Targeted abuse, especially of minors |
| Self-harm and suicide content | Material encouraging self-harm or suicide |

Platform duties:

The bill would require platforms to:

  • Conduct risk assessments for harmful content

  • Implement reasonable policies to reduce harm

  • Deploy age-appropriate design features

  • Prevent or restrict access to certain categories of harmful content, especially for children online

  • Enable users to report harms through accessible mechanisms

Regulatory architecture:

The proposed Digital Safety Commission would serve as Canada’s primary regulator for online safety, with several key functions:

  • Digital Safety Office: Monitoring platform compliance

  • Standards Office: Setting enforceable safety standards

  • Systemic Risk Office: Conducting broader assessments of platform impacts

  • Complaints Office: Handling user complaints about platform failures

  • Transparency Office: Overseeing reporting and public disclosure requirements

Consequential and related amendments:

Bill C-63 proposed significant changes to existing laws:

  • Criminal Code amendments: Refined definitions of “hatred,” updated hate speech offences, and adjusted sentencing (including potential life imprisonment in extreme cases)

  • Canadian Human Rights Act amendments: Enabled complaints about online hate speech to be brought before the Canadian Human Rights Commission

  • Mandatory reporting updates: Maintained and strengthened the 2011 law requiring internet service providers to notify authorities of child pornography and preserve data for 21 days
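The 21-day preservation window in the mandatory reporting law is a simple deadline calculation. As a hedged sketch (function name hypothetical), an ISP’s compliance tooling might compute it like this:

```python
from datetime import datetime, timedelta

# Preservation period under Canada's 2011 mandatory reporting law
PRESERVATION_DAYS = 21

def preservation_deadline(report_time: datetime) -> datetime:
    """Return the date until which the provider must preserve
    the data associated with a child pornography report."""
    return report_time + timedelta(days=PRESERVATION_DAYS)
```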

The bill also aimed to facilitate interprovincial coordination on enforcement and establish clear standards for international online communication involving Canadian users.

Support, Backlash, and Status of the Online Harms Act

Support for Bill C-63:

Frances Haugen, whose Facebook revelations helped spark global online harms debates, endorsed the Canadian approach. Some legal scholars argued the bill struck a needed balance between free expression and protection from serious harms, noting that it focused on systemic risks rather than individual speech.

Supporters emphasized:

  • The significant risk posed by unregulated platform amplification of harmful content

  • The need for a dedicated regulator with real enforcement powers

  • Protections for children that go beyond voluntary industry commitments

  • Alignment with international trends (EU Digital Services Act, UK Online Safety Act)

Criticism and backlash:

Bill C-63 faced fierce opposition from civil liberties groups and prominent public figures:

  • Canadian Civil Liberties Association: Warned of overly broad powers that could enable “thoughtcrime” prosecutions

  • Margaret Atwood: The celebrated author criticized provisions she saw as threatening free expression

  • Conservative critics: Argued the bill granted government excessive surveillance and censorship powers

Specific concerns included:

| Issue | Concern |
| --- | --- |
| Severe penalties | Potential life imprisonment for extreme hate-related offences |
| Pre-emptive measures | Provisions allowing house arrest before any crime is committed |
| Impact on minority speech | Fears that discussions about sexuality, identity, or controversial topics could be chilled |
| Prohibited ground expansion | Concerns about how “prohibited ground” definitions might affect legitimate debate |

Status:

Political delays, sustained criticism, and debates over whether to split the framework into separate bills for child protection and hate speech ultimately stalled the legislation. Bill C-63 died on the order paper when Parliament was prorogued in January 2025, never becoming law.

The failure has not ended Canada’s efforts. Successor initiatives pursuing narrower goals are expected, including separate bills addressing:

  • Child protection and kids safety online

  • Hate speech and extremism

  • AI-specific regulation

  • Financial transactions related to online exploitation

The UK Online Safety Act 2023 as a Global Reference Point

The UK Online Safety Act 2023 stands as one of the most influential and comprehensive online harms regimes globally, providing a model that Canadian and U.S. lawmakers have studied closely.

Scope and coverage:

The Act applies to:

  • User-to-user services (social media service providers, forums, messaging apps)

  • Search engines

  • App stores and regulated service providers hosting user-generated content

Key requirements:

| Requirement | Description |
| --- | --- |
| Risk assessments | Platforms must evaluate risks of illegal and harmful content |
| Content moderation | Systems to identify and remove illegal content promptly |
| Child protections | Specific duties regarding self-harm, suicide, and eating disorder content |
| Transparency reports | Regular public reporting on safety measures and outcomes |
| Design codes | Standards for parental controls and default safety settings |

Regulatory enforcement:

Ofcom, the UK’s communications regulator, enforces the Online Safety Act with powers to:

  • Investigate platform compliance

  • Issue codes of practice

  • Impose substantial fines for violations

  • Require changes to platform design and moderation systems

Limitations and criticism:

While often cited as a template, the UK Act has faced criticism:

  • Over-removal risks: Platforms may remove lawful content to avoid regulatory scrutiny

  • Burden on smaller platforms: Compliance costs may disadvantage smaller services

  • Privacy concerns: Debates over whether the law could lead to scanning private messages (though proactive monitoring of private messaging is not mandated)

The UK’s approach emphasizes systemic risk management over individual content control, a principle that has influenced the design-focused provisions in KOSA and similar legislation elsewhere.

Key Policy Themes: Safety, Privacy, and Free Expression

Online harms legislation must navigate fundamental tensions between competing values. Understanding these trade-offs is essential for evaluating any regulatory framework.

The Central Tension

Every online harms bill attempts to protect users—especially children and marginalized groups—from serious harms while avoiding the creation of a censorship apparatus. This is easier said than done.

Design-focused approaches:

Laws like KOSA and Bill C-63 attempt to focus on platform design and systemic risks rather than direct government control over individual posts:

  • Targeting algorithms and recommendation systems

  • Mandating default privacy settings

  • Requiring parental tools and age-appropriate features

  • Imposing duties to assess and mitigate risks

This approach avoids giving government officials the power to demand specific content takedowns. However, critics argue it still creates indirect pressure to remove lawful content. Platforms facing potential enforcement action may err on the side of caution, removing content that might be harmful rather than risk liability.

Privacy Trade-offs

Age assurance technologies present a significant challenge:

| Approach | Benefit | Risk |
| --- | --- | --- |
| Age verification | Enables targeted protections for minors | Requires personal data collection, undermines anonymity |
| Age estimation | Less invasive than verification | May be inaccurate, still collects biometric data |
| Parental controls | Respects family autonomy | Relies on parental engagement, may leave kids without protections |

Lawmakers have deliberately avoided mandating proactive monitoring of private messaging or AI chatbot interactions in most proposals, recognizing that such surveillance would fundamentally alter the nature of private communication online.

Public vs. Private Communications

A key distinction in these laws is between:

  • Public social media feeds: Subject to content moderation, algorithmic oversight, and transparency requirements

  • Private messaging: Generally excluded from or limited in coverage

  • AI chatbots and one-to-one services: Often outside the scope of laws designed for traditional social media platforms

The Canadian Online Harms framework and similar regimes deliberately exclude or limit coverage of private messaging, acknowledging both the privacy concerns and the practical challenges of regulating ephemeral, one-to-one communications.

Other Users and Platform Accountability

These laws also address how platforms handle interactions between users:

  • Reporting mechanisms for harassment, bullying, and predatory behavior

  • Requirements to respond to user complaints within specified timeframes

  • Obligations to provide transparency about enforcement decisions

  • Researcher access to data enabling external evaluation of platform impacts

Future Directions for Online Harms Regulation

The regulatory landscape is evolving rapidly. Several key trends will shape online harms legislation in the coming years.

United States: Refining KOSA

Continued negotiations over KOSA focus on:

  • Duty of care language: Clarifying what constitutes reasonable care without creating impossible standards

  • Scope of covered services: Determining which platforms are large enough to warrant regulation

  • Civil rights safeguards: Ensuring enforcement doesn’t disproportionately affect LGBTQ+ youth, sexual health information, or minority speech

  • Age verification alternatives: Finding approaches that protect minors without requiring invasive data collection

The Senate version’s overwhelming passage suggests eventual compromise is possible, but House debates and competing priorities (including AI regulation) may delay final action.

Canada: Pivot to Granular Approaches

The failure of Bill C-63 has not ended Canada’s commitment to online harms regulation. Instead, it has prompted a shift toward more targeted legislation:

  • Separate child protection bills: Focused specifically on protecting children from exploitation and predatory content

  • Standalone hate speech legislation: Building on elements of the Combatting Hate Act

  • AI-specific regulation: Addressing harms from generative AI, large language models, and chatbots

  • Platform accountability measures: Requiring companies to take reasonable steps to identify harmful content and respond appropriately

Parliamentary debates increasingly reference the need for approaches that can survive both political opposition and constitutional scrutiny.

The Challenge of Generative AI

Many existing online harms statutes were drafted with social media in mind. They may not fit:

  • One-to-one AI chatbot interactions

  • Ephemeral or highly personalized content

  • Generative AI systems that create harmful content on demand

  • Artificial intelligence applications that don’t fit the “platform hosting user content” model

Lawmakers are beginning to grapple with these challenges, but regulatory frameworks lag behind technological development. The primary purpose of many bills remains addressing traditional social media, leaving significant gaps in coverage.

International Trends

Global coordination is increasing through:

  • G7 discussions: Shared principles on platform accountability

  • EU Digital Services Act influences: Creating pressure for interoperable standards

  • Researcher access provisions: Enabling cross-border evaluation of platform impacts

  • Transparency requirements: Moving toward common reporting frameworks

These developments suggest the possibility of more harmonized international approaches, though national differences in free expression traditions and political systems will continue to create variation.

The Path Forward

Online harms legislation will continue to evolve as new technologies and harms emerge. The bills discussed in this article—KOSA, Canada’s Online Harms Act, and the UK Online Safety Act—represent early attempts to grapple with challenges that will only grow more complex.

Any durable framework will need to:

  • Balance safety with innovation

  • Protect fundamental rights while addressing real harms

  • Adapt to emerging technologies

  • Enable meaningful accountability without creating censorship regimes

  • Require platforms to take reasonable care without dictating specific content decisions

For parents, educators, advocates, and policymakers, understanding these frameworks now is essential. The debates over how to require companies to address online harms while preserving the open internet will shape digital life for generations to come.

Whether you’re a concerned parent trying to protect your children, a platform operator seeking to understand compliance obligations, or a citizen engaged in democratic deliberation, staying informed about online harms legislation is no longer optional—it’s essential.
