Intermediary liability: rules, risks and reforms in the digital age
- by Paul Waite
- 26 min reading time
When a user posts defamatory content on Facebook, sells counterfeit goods on Amazon, or shares illegal material on YouTube, who bears the legal responsibility? This question sits at the heart of intermediary liability—the legal framework that determines when internet intermediaries like ISPs, hosting providers, and online platforms can be held accountable for content created by their users rather than by themselves.
The stakes have never been higher. Since the mid-2000s, social media has transformed how billions of people communicate, consume information, and conduct commerce. Platforms like Facebook, YouTube, TikTok, and Amazon Marketplace have become dominant forces in the digital economy, processing staggering volumes of user-generated content every second. The regulatory response has intensified accordingly, with governments around the world introducing new intermediary liability rules since approximately 2018 to address the scale of online harms these services can facilitate or amplify.
The central tension in this field is deceptively simple but practically complex: how do we protect freedom of expression and maintain open access to the internet while simultaneously addressing genuine harms? These harms range from hate speech and terrorist propaganda to child sexual abuse material, defamation, the sale of counterfeit or dangerous products, and serious privacy violations like the non-consensual sharing of intimate images. Getting the balance wrong in either direction carries serious consequences—too much liability and platforms over-censor lawful speech, too little and harmful content proliferates unchecked.
This article surveys the key legal models governing intermediary liability, examines major regional frameworks including the EU’s Digital Services Act, India’s IT Act regime, Latin American approaches, and the United States’ Section 230, and explores current policy debates shaping the future of content regulation. Along the way, concrete examples, specific dates, and landmark court decisions will illustrate how these abstract legal principles operate in practice.
Core concepts: what counts as an “intermediary” and what is “liability”?
Understanding intermediary liability requires first clarifying who qualifies as an intermediary and what types of legal responsibility they might face. At its simplest, an intermediary is any service that facilitates communication or transactions between other parties without being the primary source of the content or goods involved. But within this broad category, different types of intermediaries perform very different functions—and regulatory frameworks typically treat them differently as a result.
The most basic category is the “mere conduit” provider, which simply transmits data from one point to another without storing or modifying it. Classic examples include ISP access providers like Deutsche Telekom in Germany or AT&T in the United States, which provide the infrastructure for users to connect to the internet. Caching services occupy a slightly more involved role, temporarily storing content to improve delivery speed—Cloudflare’s content delivery network is a prominent example. Hosting providers like AWS, Google Cloud, and traditional web hosting companies store content on their servers at the direction of users, making that content accessible to others.
The category generating the most regulatory attention today is online platforms that not only host but actively organize and disseminate third party content. Facebook, X (formerly Twitter), YouTube, TikTok, Amazon Marketplace, and Southeast Asian e-commerce giants like Shopee all fall into this category. These platforms do far more than passively store content—they curate, recommend, and amplify it through algorithms, creating a more complex relationship with the material users post.
The types of liability intermediaries may face are equally varied. Civil liability typically arises in defamation or intellectual property cases, where injured parties seek monetary damages. Criminal liability may attach for particularly serious offenses, such as when a provider knowingly fails to remove child sexual abuse material. Administrative liability has become increasingly important under modern regulatory frameworks, with authorities like the European Commission empowered to impose significant fines for systemic failures under the Digital Services Act.
A critical distinction runs through all these categories: the difference between primary liability for one’s own conduct and secondary or intermediary liability for the actions of users. Platform providers are rarely the original source of problematic content—they become liable because of their relationship to that content. The legal triggers for such liability typically include actual knowledge (the provider definitively knows illegal content exists), constructive knowledge (the provider should reasonably have known), editorial control (the provider exercises meaningful discretion over what appears), and authority over how content is presented or monetized. Different jurisdictions weight these factors differently, creating the patchwork of approaches that service providers must navigate today.
Access to the Internet and freedom of expression
Protecting internet intermediaries from excessive liability is not merely about protecting corporate interests—it is fundamentally about preserving the conditions for free expression and universal internet access. If intermediaries faced strict liability for everything their users posted, the rational response would be aggressive pre-screening and removal of anything potentially problematic, with lawful but edgy speech becoming collateral damage.
This insight drove the creation of early liability exemption frameworks. The European Union’s e-Commerce Directive, adopted in 2000, established safe harbor protections for intermediaries that act as passive conduits or respond promptly to notifications about illegal content. The United States went even further with Section 230 of the Communications Decency Act, enacted in 1996, which provides broad immunity for platforms hosting third party content. Both frameworks recognized that requiring intermediaries to pre-screen all content would be technically impractical and legally chilling. The 2011 report by the UN Special Rapporteur on Freedom of Expression specifically praised such approaches for mitigating the risk of over-censorship while still allowing targeted action against clearly illegal material.
When liability rules are overly broad or punitive, the consequences extend beyond individual content decisions. Network shutdowns have become an increasingly common response in some jurisdictions, with authorities ordering complete platform blocks or mobile internet suspensions to control the spread of content deemed harmful. Between 2019 and 2022, countries including India and Myanmar imposed significant internet restrictions justified partly on grounds of controlling harmful content, with devastating effects on legitimate communication. Even short of complete shutdowns, platforms facing serious legal risk in particular jurisdictions may geo-block content or entire services rather than expose themselves to liability, effectively fragmenting the global internet.
Best-practice principles have emerged from international soft law to guide this balance. The UN Special Rapporteur’s 2011 and 2018 reports emphasized that intermediaries should face no general obligation to monitor all content proactively. Liability should typically attach only after court orders or other due process mechanisms establish that specific content is unlawful. Transparency about content actions and robust appeal mechanisms for users whose content is removed are essential safeguards. These principles, while not legally binding, have influenced the design of frameworks like the Digital Services Act and serve as benchmarks for evaluating national approaches.
Global jurisprudence and legal models of intermediary liability
Courts and legislatures around the world have developed markedly different approaches to intermediary liability, ranging along a continuum from strict liability at one extreme to near-complete immunity at the other. Understanding this spectrum is essential for grasping why platforms behave differently across jurisdictions and why content that remains accessible in one country may be rapidly removed in another.
At the strict liability end, some proposals would require platforms to deploy upload filters or other proactive measures to detect and block illegal content before it ever appears. This approach treats intermediaries almost as publishers, responsible for everything that passes through their systems. The practical effect is to push platforms toward aggressive automated filtering, with inevitable false positives affecting lawful content. Moving along the continuum, fault-based liability holds intermediaries responsible when they have been negligent in responding to problems—for example, when a hosting provider unreasonably delays acting on a valid complaint. Knowledge-based liability, the model underlying both the EU’s e-Commerce Directive and the DSA, triggers responsibility only after the intermediary obtains actual knowledge of illegal content, typically through a notice from a user or authority, and fails to act expeditiously.
The court-adjudicated model places even greater emphasis on due process, requiring judicial orders before intermediaries must act. Brazil’s Marco Civil da Internet, enacted as Law No. 12.965 in 2014, exemplifies this approach for many categories of content, requiring court orders rather than mere private notices to trigger platform liability for most material other than non-consensual intimate images. At the far end of the spectrum, the broad immunity model exemplified by US Section 230 provides sweeping protection for platforms hosting third party content, with liability attaching only in narrow statutory exceptions for areas like federal criminal law, intellectual property, and certain privacy violations.
Landmark court decisions have shaped how these models operate in practice. In Argentina, the Supreme Court addressed search engine liability for linking to defamatory content, establishing parameters for when mere indexing creates responsibility. India’s Supreme Court decision in Shreya Singhal v. Union of India in 2015 significantly narrowed the scope of intermediary liability under the IT Act by clarifying that actual knowledge requires a court order or government notification, not merely user complaints. The European Court of Justice’s 2014 decision in Google Spain v. AEPD established the “right to be forgotten” in search results, requiring search engines to delist certain personal information upon request—a ruling with profound implications for how intermediaries balance access to information against individual privacy. Each of these decisions illustrates how courts interpret and refine the statutory frameworks legislators create.
Regional snapshots: Europe, India, Latin America, Southeast Asia
The European Union’s approach to intermediary liability has evolved significantly over two decades. The e-Commerce Directive established foundational safe harbor principles in 2000, but a series of Court of Justice decisions progressively clarified and sometimes expanded platform responsibilities. In L’Oréal v. eBay in 2011, the Court held that while eBay enjoyed hosting protections, it could be required to take measures preventing future infringements once notified of specific violations. The 2014 Google Spain decision created new obligations for search engines regarding personal data. These rulings set the stage for the comprehensive reform represented by the Digital Services Act, which entered into force in November 2022 and became fully applicable across the EU from 17 February 2024. The DSA preserves core safe harbor principles while layering significant new transparency and due diligence obligations, particularly for the largest platforms.
India’s intermediary liability regime combines statutory provisions with influential judicial interpretation. Section 79 of the Information Technology Act provides conditional immunity for intermediaries, but the scope of that protection has been shaped by court decisions. The Supreme Court’s 2015 Shreya Singhal ruling struck down the vague Section 66A criminalizing offensive online content and clarified that intermediary liability under Section 79 requires actual knowledge through specific mechanisms—primarily court orders or government notifications—rather than mere user complaints. The 2021 IT Rules introduced additional obligations including expedited takedown timelines, requirements to identify first originators of certain messages, and grievance officer appointments, though these rules have faced ongoing legal challenges regarding their compatibility with fundamental rights.
Latin American approaches vary but often emphasize judicial involvement in content decisions. Argentina’s courts have addressed search engine liability extensively, with the Supreme Court establishing that search engines are not generally liable for third-party content but may face responsibility in specific circumstances involving actual knowledge and failure to act. Brazil’s Marco Civil da Internet requires court orders for most content takedowns, reflecting a deliberate choice to place judges rather than private platforms in the position of determining what speech is unlawful—though expedited procedures exist for non-consensual intimate images and certain other high-harm categories.
Southeast Asia presents a more fragmented picture, with some jurisdictions pursuing aggressive approaches to platform accountability that raise concerns about speech impacts. In 2023, Malaysia’s communications regulator MCMC threatened action against Meta over what authorities characterized as harmful content, illustrating how government pressure on intermediaries can create chilling effects on legitimate expression. Similar dynamics have played out across the region, with varying degrees of procedural protection for platforms and users. These regional differences contribute to an emerging global “toolbox” of regulatory approaches, with lawmakers increasingly drawing on comparative experience when designing or reforming their own frameworks.
Intermediary liability in the EU: from the e-Commerce Directive to the Digital Services Act (DSA)
The European Union’s journey from the e-Commerce Directive to the Digital Services Act represents one of the most significant evolutions in intermediary liability law anywhere in the world. When the e-Commerce Directive was adopted in June 2000, the commercial internet was still relatively young—Facebook would not be founded for another four years, YouTube for five, and the smartphone revolution was nearly a decade away. The Directive established a workable framework for its era, creating safe harbors for mere conduit transmission, caching, and hosting services that would remain largely unchanged for two decades.
By the late 2010s, it had become clear that the 2000 framework was insufficient for the platform economy that had emerged. Social media giants processed billions of posts daily, online marketplaces connected millions of sellers with consumers across borders, and the scale of both beneficial and harmful content had exploded beyond anything the original legislators anticipated. The European Commission proposed the Digital Services Act in December 2020, initiating an intensive legislative process that would reshape platform regulation for the coming decade.
The DSA’s journey through EU institutions was relatively swift by Brussels standards. Political agreement was reached in April 2022, the European Parliament formally adopted the regulation on 5 July 2022, and the Council completed the legislative process later that year. The regulation entered into force in November 2022, with Very Large Online Platforms and Very Large Online Search Engines—those with at least 45 million monthly active users in the EU—facing compliance obligations from late August 2023. The full regulation became applicable across all covered services from 17 February 2024.
Crucially, the DSA does not abandon the safe harbor model established by the e-Commerce Directive. Rather, it preserves core liability protections while adding tiered obligations that increase with a service’s size, reach, and potential for societal impact. For platforms, this means that the basic bargain remains: act responsibly on notice of unlawful content, and you retain your liability exemption. But “acting responsibly” now encompasses a far more detailed set of procedural and transparency requirements than existed under the 2000 framework.
Key DSA liability and safe-harbor provisions
The DSA maintains the three-tier safe harbor structure familiar from the e-Commerce Directive, providing important continuity for service providers. Mere conduit services—those that simply transmit information or provide access to a communication network—retain immunity under Article 4 for content they neither initiate, select the receiver of, nor modify. Caching services, which temporarily and automatically store information to make onward transmission more efficient, enjoy protection under Article 5 provided they comply with conditions regarding access and content accuracy. Hosting providers, including the platforms that have become central to online communication, are protected under Article 6 when they lack actual knowledge of illegal content or act expeditiously to remove or disable access to such content upon obtaining knowledge.
The DSA importantly clarifies the actual knowledge standard that triggers hosting provider responsibility. Article 6 specifies that general awareness that illegal activity or content exists on a platform does not by itself constitute actual knowledge of specific items requiring action. This prevents the argument that platforms lose safe harbor protection simply because everyone knows some illegal content exists somewhere on any large service. Complementing this, Article 8 confirms that intermediaries have no general obligation to monitor the content they transmit or store, nor any duty to actively seek facts or circumstances indicating illegal activity—a crucial protection against surveillance mandates that would transform the nature of online services.
The regulation introduces what might be called a Good Samaritan provision, addressing a long-standing concern in intermediary liability law. Platforms have historically worried that voluntary efforts to detect and remove illegal content could be characterized as editorial involvement, potentially undermining their safe harbor protection. The DSA addresses this by clarifying that good-faith efforts to identify, investigate, and remove illegal content do not automatically result in losing liability protections. This encourages responsible content moderation without penalizing platforms for making best efforts to keep their services clean.
The DSA also addresses online marketplaces specifically, recognizing that these platforms occupy a complex position between pure hosting and direct commercial involvement. When platforms like Amazon Marketplace or AliExpress present themselves as the trader, control key elements of the transaction such as pricing or delivery, or otherwise give average consumers the impression they are dealing with the platform rather than a third-party seller, they may lose the benefit of limited liability and face treatment more akin to sellers themselves. This provision responds to consumer protection concerns about counterfeit and unsafe products reaching European consumers through marketplace platforms.
New obligations for platforms, VLOPs, and VLOSEs
Beyond preserving safe harbors, the DSA creates an unprecedented set of affirmative obligations for online platforms, with requirements escalating based on the service’s size and potential impact. All platforms must now implement user-friendly notice-and-action mechanisms so users can easily report illegal content they encounter. When platforms remove content or restrict accounts, they must provide clear statements of reasons explaining the basis for their decisions. Regular transparency reports must detail content moderation activities, including the use of automated tools, complaint volumes, and decision outcomes.
For Very Large Online Platforms and Very Large Online Search Engines—those designated by the European Commission as having at least 45 million monthly active users in the EU—the obligations become substantially more demanding. The Commission designated the first tranche of VLOPs and VLOSEs in 2023, including services like Facebook, YouTube, TikTok, Amazon, Google Search, and Bing. These services must conduct comprehensive systemic risk assessments examining how their design, algorithms, and moderation practices may affect issues including disinformation, fundamental rights, electoral integrity, gender-based violence, and the protection of minors.
Risk assessments must lead to reasonable and proportionate mitigation measures, which could include changes to recommendation algorithms, advertising systems, interface design, or terms of service. VLOPs and VLOSEs must submit to independent annual audits verifying their compliance and make audit reports publicly available. They must also provide meaningful transparency about their recommender systems—the algorithms that determine what content users see—and grant access to relevant data for vetted researchers and regulators investigating systemic issues.
Enforcement of these obligations rests with national Digital Services Coordinators in each member state, with the European Commission exercising direct supervisory power over VLOPs and VLOSEs. The stakes are significant: platforms can face fines of up to six percent of global annual turnover for serious infringements, with periodic penalty payments available for ongoing non-compliance. The regulation also creates user rights, including the ability to seek compensation for damage caused by platform infringements—an important accountability mechanism extending beyond regulatory enforcement. Major technology companies began adapting their compliance programs throughout 2022 and 2023, launching new transparency centers, updating terms of service, and implementing controls for recommender systems to meet DSA timelines.
Non-consensual dissemination of intimate images and other high-harm content
Not all online harms are created equal, and regulatory frameworks increasingly recognize that certain categories of content require exceptional treatment. Non-consensual dissemination of intimate images—commonly known as NCII or “revenge porn”—exemplifies content that courts and legislators treat as requiring rapid, effective remedies due to the severe and often irreversible harm it causes to victims. Privacy, dignity, psychological wellbeing, and physical safety may all be implicated when intimate images are shared without consent, often as a weapon of harassment, coercion, or retaliation.
Courts across jurisdictions have developed expedited procedures for addressing NCII precisely because traditional notice and takedown timelines may be inadequate. In North America, Europe, and Asia, judges regularly order search engines and hosting services to delist or disable access to such content within hours or days of receiving complaints. These orders frequently precede full adversarial hearings, reflecting a judicial assessment that the harm from delay outweighs the procedural costs of acting on incomplete information. Similar urgency characterizes responses to child sexual abuse material, direct violent threats, and clear incitement to imminent violence.
This exceptional treatment creates inevitable tensions with the broader intermediary liability framework. Victims understandably demand swift and comprehensive action, including removal from all platforms where content may have spread and delisting from search results. But overly broad or vaguely worded orders can push platforms toward excessive removal, potentially affecting newsworthy reporting, artistic expression, or content that superficially resembles but is not actually the reported material. The challenge is crafting remedies that are effective without becoming tools for censorship or harassment of critics.
Modern frameworks address NCII through several mechanisms that balance speed with appropriate safeguards. Notice and takedown systems often include priority channels for intimate image reports, with expedited review processes and specialized staff. Trusted flagger programs allow verified organizations with expertise in image-based abuse to escalate reports for faster action. Law enforcement protocols enable coordination with police in cases involving criminal conduct. Perhaps most significantly, major platforms have adopted hashing technologies that create digital fingerprints of known NCII, allowing automated detection and prevention of re-uploads—though implementation must respect data protection standards and avoid false positives.
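To make the hashing approach concrete, the sketch below shows the simplest exact-match variant in Python: each verified report is fingerprinted, and future uploads are checked against the stored digests. The `KNOWN_NCII_HASHES` store and function names are invented for illustration, and a plain SHA-256 digest only catches byte-identical re-uploads; deployed systems such as perceptual-hash matching are designed to tolerate resizing and re-encoding.

```python
import hashlib

# Hypothetical store of fingerprints from verified NCII reports.
# Exact hashes only catch byte-identical re-uploads; real systems
# use perceptual hashing so minor edits still match.
KNOWN_NCII_HASHES: set[str] = set()

def fingerprint(image_bytes: bytes) -> str:
    """Compute the digital fingerprint of an uploaded image."""
    return hashlib.sha256(image_bytes).hexdigest()

def register_verified_report(image_bytes: bytes) -> None:
    """Store the fingerprint of an image confirmed as NCII."""
    KNOWN_NCII_HASHES.add(fingerprint(image_bytes))

def should_block_upload(image_bytes: bytes) -> bool:
    """Reject uploads whose fingerprint matches a known report."""
    return fingerprint(image_bytes) in KNOWN_NCII_HASHES
```

The design choice that matters here is where the matching happens and how false positives are handled: automated blocking should feed into the same statement-of-reasons and appeal mechanisms described elsewhere in this article, so that a mistaken match can be corrected.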
For NCII, child sexual abuse material, and similarly severe harms, many advocates and policymakers accept narrower safe harbors and faster mandated response times than would be appropriate for general complaints about offensive or merely distasteful content. However, even in these exceptional categories, best practice maintains requirements for judicial oversight of contentious orders, transparency logs documenting actions taken, and meaningful remedies for individuals whose content is incorrectly removed.
Intermediary liability & content regulation: recent policy trends and debates
Intermediary liability has moved from a relatively technical corner of internet law to the center of global debates about digital governance. The shift accelerated around 2018, driven by high-profile scandals involving platform amplification of disinformation, evidence of foreign interference in elections, concerns about online harassment and its effects on public discourse, and growing unease about the concentration of power in a small number of technology companies. Governments that had largely left platforms to self-regulate began pursuing more assertive regulatory strategies.
Contemporary reforms pursue several interrelated policy goals. They seek to increase platform responsibility for illegal content and for systemic risks their services may create or exacerbate. They aim to preserve space for lawful expression and continued innovation in digital services. And they attempt to ensure that content decisions affecting users are subject to transparency, due process, and effective remedies. Balancing these objectives has proven challenging, with different jurisdictions striking different compromises based on their legal traditions, political circumstances, and assessments of the most pressing harms.
The regulatory landscape has become notably crowded. The EU’s Digital Services Act represents the most comprehensive effort to date, but it sits alongside related instruments including the Digital Markets Act addressing platform competition, the Data Governance Act, NIS2 cybersecurity requirements, and the emerging AI Act. The United Kingdom enacted its Online Safety Act in 2023, building on the draft Online Safety Bill first published in May 2021 and establishing a duty of care framework with Ofcom developing detailed codes of practice. In the United States, Section 230 faces ongoing Congressional scrutiny, with proposals from both parties to modify its protections, though no reform has yet achieved sufficient consensus for passage. Regional initiatives include draft ASEAN Guidelines on the Governance of Digital Platforms and recommendations from organizations like the Global Network Initiative emphasizing rights-respecting approaches to platform accountability.
A notable trend across these frameworks is the shift from purely reactive, content-by-content decisions toward systemic governance. Where earlier intermediary liability rules focused primarily on whether a platform responded appropriately to individual takedown requests, newer approaches like the DSA require risk assessments, algorithmic transparency, researcher access, and independent audits examining how platform design choices affect society at scale. This does not replace the traditional safe harbor framework—expeditious action on illegal content remains central—but adds a governance layer addressing the roots of harm rather than only its symptoms.
Regulatory “toolbox”: scope, knowledge, and notice-and-action design
Policymakers designing intermediary liability regimes have numerous adjustable parameters at their disposal, a kind of regulatory toolbox that can produce very different outcomes depending on how the elements are configured. Understanding these design choices helps explain why platforms face such different obligations across jurisdictions and why debates about seemingly technical legal provisions can have major implications for online speech.
Scope determines which services fall within a regulatory framework. Some regimes cover only traditional hosting and access providers, while others extend to search engines, social media platforms, messaging services, app stores, and online marketplaces. The DSA, for example, applies across a wide range of information society services but imposes its most demanding obligations only on very large platforms. Scope decisions also determine whether curating, ranking, or algorithmically recommending content changes a provider’s liability status. Under US Section 230, platforms generally retain immunity even when they actively curate or recommend third-party speech. Under the DSA, by contrast, a platform that integrates too deeply into transactions may forfeit its hosting defense and face treatment as a direct participant.
The knowledge standard defines what triggers a platform’s obligation to act. Some regimes accept any user notice as creating actual knowledge, potentially enabling abuse by bad-faith complainants seeking to suppress legitimate content. Others require formal court orders or government notifications, providing greater due process protection but potentially slowing responses to genuine harms. The EU approach distinguishes between general awareness that illegal activity exists—which does not defeat safe harbor—and actual knowledge of specific illegal content requiring action.
Notice and action systems determine procedural requirements for complaints and responses. Key variables include required information in notices, response deadlines (some frameworks demand action within 24 or 36 hours for serious content), counter-notice procedures allowing affected users to challenge removals, and complaint mechanisms for users dissatisfied with platform decisions. Well-designed systems include safeguards against misuse, such as penalties for manifestly unfounded notices submitted to harass critics or suppress competition.
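As a rough illustration of how these design variables translate into a platform’s internal tooling, the Python sketch below models a notice record with per-category response deadlines and a counter-notice field. The categories, deadlines, and field names are illustrative assumptions, not requirements drawn from any particular statute.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum
from typing import Optional

class NoticeCategory(Enum):
    GENERAL_ILLEGAL = "general_illegal"
    INTIMATE_IMAGE = "intimate_image"      # high-harm: expedited handling
    COUNTERFEIT_LISTING = "counterfeit_listing"

# Illustrative deadlines only; statutory timelines vary by jurisdiction.
RESPONSE_DEADLINES = {
    NoticeCategory.GENERAL_ILLEGAL: timedelta(days=7),
    NoticeCategory.INTIMATE_IMAGE: timedelta(hours=24),
    NoticeCategory.COUNTERFEIT_LISTING: timedelta(hours=72),
}

@dataclass
class Notice:
    notice_id: str
    category: NoticeCategory
    content_url: str
    complainant_explanation: str           # why the content is allegedly illegal
    received_at: datetime
    decision: Optional[str] = None         # e.g. "removed", "rejected"
    statement_of_reasons: Optional[str] = None
    counter_notice: Optional[str] = None   # uploader's challenge, if any
    decided_at: Optional[datetime] = None

    def deadline(self) -> datetime:
        """When the platform must act on this notice."""
        return self.received_at + RESPONSE_DEADLINES[self.category]

    def is_overdue(self, now: datetime) -> bool:
        """True if the notice is still undecided past its deadline."""
        return self.decided_at is None and now > self.deadline()
```

Even this toy model makes the trade-offs visible: shortening a deadline or broadening a category changes behavior across every notice of that type, which is exactly why the calibration choices discussed next matter so much.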
The consequences of poorly calibrated design choices can be severe. Vague standards for what constitutes illegal content, combined with short response deadlines and significant liability exposure, predictably lead to over-removal as platforms err on the side of caution. Conversely, excessive procedural requirements before any action can be taken may leave genuine victims without effective remedies. The challenge is designing frameworks that enable rapid response to clear illegality while preserving space for borderline, contested, or ultimately lawful expression.
Future directions: balancing innovation, rights, and enforcement
The intermediary liability frameworks established over the past several years will face significant tests as technology continues to evolve. Emerging technologies create novel challenges that existing rules address imperfectly if at all, and regulators must balance the need for updating legal frameworks against the risks of stifling beneficial innovation or creating surveillance infrastructure incompatible with human rights.
Generative artificial intelligence, which became a mainstream phenomenon around 2022 with systems like ChatGPT and image generators, raises particularly difficult questions. When AI systems synthesize content that is defamatory, infringes copyright, or constitutes illegal material, traditional intermediary liability analysis struggles with identifying the responsible party. Is the AI developer liable, the platform hosting the AI service, the user who prompted the output, or some combination? Existing frameworks generally assume a human creator of content, an assumption that generative AI disrupts. Regulators are beginning to address these questions—the EU AI Act includes provisions relevant to generative systems—but coherent approaches remain in development.
Encrypted messaging services present a different challenge. Strong encryption protects privacy, secures communications against criminals and authoritarian governments, and enables journalists and activists to work safely. But the same properties that protect legitimate users also shield illegal activity from detection. Proposals for client-side scanning—examining content on user devices before encryption—have proven deeply controversial, with critics arguing they fundamentally undermine encryption’s security guarantees and create infrastructure susceptible to government abuse. Finding regulatory approaches that address genuine law enforcement needs without destroying privacy remains an unsolved problem.
Decentralized and federated services add yet another layer of complexity. Platforms like Mastodon operate across networks of independently administered servers, with no single central operator who might be held responsible for content. Traditional intermediary liability assumes an identifiable intermediary to whom obligations attach; federated architectures distribute that role across many parties, each potentially in different jurisdictions with different legal requirements. Whether existing regulatory frameworks can effectively address these services, or whether new approaches are needed, remains an open question.
Despite these challenges, certain best-practice directions seem clear. Safe harbor frameworks have proven essential to a functioning internet and should be maintained and refined rather than abandoned. Transparency, accountability, and user empowerment—including meaningful appeal rights and explanation of content decisions—serve both speech interests and platform legitimacy. Investment in independent research access and systematic impact assessment offers more sustainable approaches than purely reactive takedown regimes. Intermediary liability will remain a core element of digital governance for the foreseeable future, and coherent, rights-respecting frameworks are essential for maintaining a healthy, open internet that serves the world’s diverse users.