Telecom Policy and Content Moderation: Navigating Digital Risk
- By Paul Waite
- 19 min read
Executive summary: telecom policy, content moderation and rising digital risk
The convergence of telecom policy and content moderation accelerated dramatically around 2018, when regulators worldwide began recognising that traditional carrier-focused rules were insufficient for the services telecom operators increasingly provide. The European Union’s Digital Services Act (DSA), adopted in 2022 and fully applicable since February 2024, now imposes transparency, systemic risk assessment, and takedown obligations on a wide range of intermediary services. The United Kingdom’s Online Safety Act 2023 extends similar duties to messaging, gaming, and user-to-user services that mobile operators bundle or host. In Latin America, Brazil’s Bill 2630/2020 proposes structural reforms positioning CGI.br as a regulatory intermediary, directly implicating telecom-backed platforms and zero-rating arrangements.
What makes this moment distinctive is the overlap between “over-the-top” (OTT) services and traditional telecom infrastructure. A mobile operator running a messaging app, hosting a cloud marketplace, or distributing IPTV now faces obligations once reserved for social media platforms. These include duties around illegal content removal, harmful online content mitigation, child safety protections, and algorithmic transparency. The regulatory challenges are no longer about carriage alone but about gatekeeping, curation, and the systemic risks that flow from scale.
This article addresses three core themes. First, it examines the shift from pure connectivity to platform-style liability regimes, where telecoms must classify their services and map obligations accordingly. Second, it explores the rise of hybrid governance structures and regulatory intermediaries that change how compliance actually works on the ground. Third, it provides a practical framework for building content moderation and risk programmes that scale across jurisdictions. The discussion is written for in-house legal, policy, and risk teams at telecom and digital service providers operating across the EU, UK, US, and key emerging markets.
From telecom carriage rules to platform-style liability regimes
The concept of “common carriage” shaped telecom regulation for over a century. Under frameworks like the US Communications Act of 1934 and the EU’s successive telecoms packages, network operators bore obligations around lawful intercept, quality of service, and non-discriminatory access—but not responsibility for the content flowing through their pipes. The logic was straightforward: carriers transmitted, they did not curate.
That logic began eroding as telecom operators expanded into hosting, messaging, cloud services, and content delivery networks. The Digital Services Act (DSA), Regulation (EU) 2022/2065, marks the clearest regulatory acknowledgment of this shift. In force since November 2022 and fully applicable from 17 February 2024, the DSA moves beyond transmission obligations to regulate hosting services, caching providers, online platforms, and very large online platforms (VLOPs) and very large online search engines (VLOSEs). The regulation recognises that many internet service providers now perform functions indistinguishable from those of traditional platforms.
The DSA preserves the “mere conduit” safe harbour for genuine transmission services, carried over from Article 12 of the e-Commerce Directive into Article 4 of the DSA. But it layers new requirements on services that go beyond pure carriage. Hosting providers must implement notice-and-action mechanisms. Online platforms face transparency obligations around algorithmic systems and content moderation practices. VLOPs and VLOSEs, designated when a platform or search engine reaches 45 million average monthly active users in the EU, must conduct annual risk assessments, submit to independent audits, and provide data access to researchers and regulators.
The practical consequence for telecom operators is significant. Compliance playbooks must distinguish between services that remain pure carriage and those qualifying as hosting, online marketplaces, or platforms. A telecom-run DNS resolver may fall under different rules than a telecom-run app store or cloud hosting service. Mapping each product to its regulatory category is now foundational compliance work.
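Some teams capture this mapping in a machine-readable service inventory so that legal, product, and compliance tooling share one view of which regime each product sits in. The sketch below is a minimal illustration in Python, assuming a hypothetical operator portfolio; the category names mirror the DSA’s tiers, but the example products and their classifications are invented and would need case-by-case legal confirmation.

```python
from dataclasses import dataclass
from enum import Enum

class DsaCategory(Enum):
    """DSA intermediary tiers, roughly from lightest to heaviest obligations."""
    MERE_CONDUIT = "mere conduit"            # pure transmission
    CACHING = "caching"                      # automatic, temporary storage
    HOSTING = "hosting"                      # storage at a user's request
    ONLINE_PLATFORM = "online platform"      # hosting plus dissemination to the public
    VLOP = "very large online platform"      # 45M+ average monthly active users in the EU

@dataclass
class ServiceRecord:
    name: str
    category: DsaCategory
    notes: str = ""

# Hypothetical product inventory for a telecom operator (illustrative only).
inventory = [
    ServiceRecord("Consumer broadband access", DsaCategory.MERE_CONDUIT),
    ServiceRecord("Public DNS resolver", DsaCategory.CACHING,
                  "Classification is debated; confirm with counsel."),
    ServiceRecord("Consumer cloud storage", DsaCategory.HOSTING),
    ServiceRecord("Operator-run app store", DsaCategory.ONLINE_PLATFORM),
]

for svc in inventory:
    suffix = f" ({svc.notes})" if svc.notes else ""
    print(f"{svc.name}: {svc.category.value}{suffix}")
```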
Comparing traditional telecom duties with modern content obligations
| Obligation Type | Traditional Telecom Duties | Modern Content-Related Duties |
|---|---|---|
| Legal basis | Communications Acts, EECC, net neutrality rules | DSA, Online Safety Act, national platform laws |
| Core focus | Lawful intercept, quality of service, spectrum management | Illegal content takedown, harmful content mitigation, transparency |
| User protection | Emergency services access, data privacy under telecoms rules | Notice-and-action systems, appeal mechanisms, child safety |
| Reporting | Quality metrics, spectrum usage | Transparency reports on content moderation, algorithmic audits |
| Enforcement | National telecoms regulators | Digital Services Coordinators, Ofcom, cross-border cooperation |
Key global regimes shaping telecom content moderation
The evolving regulatory landscape for telecom content moderation varies substantially by jurisdiction. What follows is a structured overview of the major regimes that in-house teams must navigate, with specific laws, dates, and regulators identified for each.
European Union
The EU’s approach combines the Digital Services Act with the Digital Markets Act (DMA) and the existing European Electronic Communications Code (EECC). The DSA applies obligations in tiers: all intermediary services must meet baseline transparency rules; hosting services add notice-and-action duties; online platforms face further requirements around traceability of traders; and very large online platforms and search engines trigger systemic risk obligations.
The threshold for VLOP/VLOSE designation is 45 million monthly active users in the EU. Once designated, platforms must conduct annual risk assessments covering dissemination of illegal content, negative effects on fundamental rights, and risks to electoral processes. The European Commission requested information from designated platforms throughout 2023-2024, and national Digital Services Coordinators now enforce obligations across member states. The European Data Protection Board provides guidance where content moderation intersects with General Data Protection Regulation (GDPR) requirements.
The DMA, meanwhile, addresses competition concerns for designated “gatekeepers,” with implications for telecom-owned app stores or messaging services that reach gatekeeper thresholds. The interaction between the two Acts means telecom operators may face both platform governance and digital markets obligations simultaneously.
United Kingdom
The Online Safety Act 2023 creates a comprehensive framework for user-to-user services and search engines, with Ofcom designated as the regulator. The Act applies to services enabling user-generated content sharing, including messaging services, online gaming platforms, and social features embedded in telecom products.
Ofcom launched consultations on codes of practice throughout 2023-2025, covering illegal content duties, child safety obligations, and transparency reporting. The Act’s duties to protect children are particularly extensive, requiring platforms to use age-assurance measures and prevent access to harmful content. Telecom operators bundling messaging apps or providing family-oriented products must map their services against these requirements.
The encryption debates of 2023 remain live issues. The Act grants Ofcom powers to require technology companies to identify and remove child sexual abuse material, raising questions about end-to-end encryption in operator-branded communication services.
United States
The US maintains a distinct approach centred on Section 230 of the Communications Decency Act, which immunises platforms from liability for user-generated content while permitting voluntary content moderation. This protection remains foundational but faces mounting pressure from state legislatures and courts.
Texas HB20 and Florida SB7072 attempted to restrict platforms’ content moderation discretion, though federal courts have issued conflicting rulings on their constitutionality. California’s Age-Appropriate Design Code Act imposes child safety requirements that affect telecom services reaching minors. The FCC continues traditional telecom oversight but has largely remained separate from content moderation disputes.
For telecom operators with US operations, the practical reality is a patchwork: federal immunity under Section 230, state-level content and age-verification mandates, and ongoing litigation that may reshape the landscape by 2025-2026.
Brazil
Brazil’s Bill 2630/2020 (the “Fake News Bill”) proposes significant reforms to platform governance, including a central role for CGI.br (the Brazilian Internet Steering Committee) as a regulatory intermediary. The bill addresses content moderation, transparency, and platform accountability for digital platforms operating in Brazil.
Telecom operators are implicated through zero-rating arrangements that influence which content users can access without data charges. When operators provide preferential access to certain platforms, they become part of the content distribution chain that Bill 2630/2020 seeks to regulate.
India
India’s Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021 impose significant compliance obligations on intermediaries, including traceability requirements for messaging services and content takedown timelines. Mobile operators providing messaging or content hosting services must comply with these rules.
The Rules require significant social media intermediaries to appoint compliance officers, enable identification of first originators of certain messages, and remove unlawful content within specified timeframes. These obligations intersect with broader questions about encryption and cross-border data transfers.
Australia
Australia’s Online Safety Act 2021 and the industry codes registered with the eSafety Commissioner create a framework in which sections of the online industry, including internet carriage services and app distribution services, develop co-regulatory codes addressing online content. Mobile operators and ISPs are implicated when they provide services that host or distribute harmful content.
The eSafety Commissioner has powers to issue takedown notices and enforce compliance with registered codes. Telecom operators must understand which codes apply to their services and ensure appropriate detection and response mechanisms.
Telecom operators as content gatekeepers: practical moderation touchpoints
Telecom networks increasingly perform “edge” and value-added functions that create de facto content moderation responsibilities. Understanding where these touchpoints arise is essential for compliance teams.
DNS and network-level blocking
Court-ordered site blocks remain a primary intersection between telecom operations and content moderation. Operators implement DNS blocking for:
- Piracy enforcement: Copyright holders obtain injunctions requiring ISPs to block access to infringing sites
- Gambling restrictions: National regulators mandate blocking of unlicensed gambling platforms
- National security: Government orders to restrict access to designated terrorist or extremist content
- Online safety: Blocking of sites hosting child sexual abuse material under legal mandates
Each blocking order requires operators to balance compliance with over-blocking risks. EU and Indian jurisprudence has addressed proportionality concerns, requiring that blocking measures not inadvertently restrict access to lawful content.
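At an operational level, most blocking regimes reduce to a blocklist in which every entry is tied to the specific order that mandates it and to an expiry or review date. The sketch below is a simplified Python illustration under assumed data structures; the domains, order references, and field names are invented, and production deployments typically rely on resolver features such as response policy zones rather than application code.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class BlockOrder:
    """A blocked domain tied to the legal order that mandates the block."""
    domain: str
    legal_basis: str            # e.g. court injunction or regulator mandate (reference elided)
    category: str               # piracy, gambling, national security, online safety
    expires: Optional[date]     # orders should be time-limited and periodically reviewed

# Hypothetical blocklist; in practice loaded from an order-management system.
BLOCKLIST = {
    "infringing-site.test": BlockOrder("infringing-site.test",
                                       "High Court injunction (ref elided)", "piracy", date(2026, 1, 1)),
    "unlicensed-casino.test": BlockOrder("unlicensed-casino.test",
                                         "Gambling regulator mandate (ref elided)", "gambling", None),
}

def resolve_decision(domain: str, today: date) -> str:
    """Block only when a current, documented order covers the domain."""
    order = BLOCKLIST.get(domain.lower().rstrip("."))
    if order is None:
        return "resolve"        # default is to carry traffic: no over-blocking
    if order.expires is not None and today > order.expires:
        return "resolve"        # lapsed orders must not keep blocking content
    return "block"

print(resolve_decision("Infringing-Site.test", date(2025, 6, 1)))   # block
print(resolve_decision("lawful-site.test", date(2025, 6, 1)))       # resolve
```

Keeping the legal basis and expiry alongside each entry also makes proportionality reviews and transparency reporting on blocking volumes straightforward.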
Zero-rating and traffic management
Zero-rating arrangements—where certain content doesn’t count against data caps—create content visibility decisions. When an operator zero-rates one streaming service over another, it influences user consumption patterns. Net neutrality rules in the EU restrict such arrangements, but enforcement varies globally.
These choices intersect with content moderation because they affect which platforms gain user attention. A zero-rated social media platform may become the dominant source of information for users on limited data plans, amplifying any content moderation failures on that platform.
Child-safe and “clean pipe” products
Operators have offered parental control and content filtering products since roughly 2015. These services now intersect with:
- UK Age Appropriate Design Code requirements for services likely to be accessed by children
- DSA transparency requirements obligating disclosure of how content filtering systems operate
- National age-assurance mandates requiring verification before access to certain content categories
Operators must ensure their filtering products align with these frameworks while avoiding over-blocking that restricts legitimate speech.
Messaging and encrypted communications
Telecom operators increasingly run proprietary messaging apps or partner with messaging platforms. The tension between end-to-end encryption and content scanning obligations is acute:
- The UK Online Safety Act debates around client-side scanning
- EU proposals for detecting child sexual abuse material in encrypted communications
- National security requirements for lawful access
Operators must navigate these conflicting demands while maintaining user trust in secure communication services.
App stores, streaming, and cloud services
When telecom operators run app stores, IPTV services, or cloud marketplaces, they may qualify as online platforms or online marketplaces under the DSA. This triggers:
- Algorithmic transparency obligations for recommender systems
- Out-of-court dispute settlement requirements
- Traceability duties for business users
- Consumer protection measures for marketplace transactions
Each product line requires classification against regulatory definitions to determine applicable obligations.
Regulatory intermediaries and hybrid governance in content moderation
Content moderation in Europe and globally increasingly operates through hybrid governance structures involving regulatory intermediaries positioned between state regulators and regulated entities. Understanding these arrangements is essential for telecom compliance teams.
The regulatory intermediary model
Recent scholarship on digital governance describes a “regulatory intermediary triangle” where entities other than direct regulators play structured roles in compliance and enforcement. The DSA formalises this through certified out-of-court dispute settlement (ODS) bodies under Article 21, enabling users to challenge content moderation decisions through independent mechanisms.
National Digital Services Coordinators serve as primary enforcement nodes, with the European Commission retaining direct authority over VLOPs and VLOSEs. This creates a layered system where telecom operators may face scrutiny from multiple institutional actors simultaneously.
EU out-of-court dispute bodies
The DSA requires platforms to cooperate with certified ODS bodies that provide alternative resolution for content moderation disputes. Appeals Centre Europe, certified in Ireland, represents one such body that platforms—including telecom-operated services—must engage with when users contest moderation decisions.
Key design considerations for telecom compliance teams include:
- Independence and funding: Ensuring engagement with genuinely independent bodies
- Cross-border recognition: Understanding which ODS decisions apply across jurisdictions
- Consistency risks: Managing potentially conflicting outcomes from different national bodies
Brazil’s CGI.br model
Bill 2630/2020 proposes positioning CGI.br as a multi-stakeholder regulatory intermediary with authority over content moderation standards and transparency requirements. For telecom operators with Brazilian operations, this represents a distinct governance model combining technical and policy coordination.
Industry codes and multi-stakeholder fora
Beyond formal regulatory intermediaries, telecom operators interact with:
- Australian industry codes registered with the eSafety Commissioner, developed by industry associations
- India’s self-regulatory bodies and grievance committees established under the IT Rules 2021
- Regional telecom associations in Africa and Asia developing shared content moderation practices
- GSMA initiatives on child safety, scam prevention, and trusted communications
Examples of hybrid governance in practice
Several models illustrate how non-state entities shape content moderation:
- Meta’s Oversight Board: An independent body making binding decisions on specific content cases, influencing platform governance beyond regulatory mandates
- EU codes of conduct: Co-regulatory arrangements on illegal hate speech, where platforms commit to review most flagged content within 24 hours
- Telecom sector codes: Industry agreements on child-safety filters, scam-call mitigation, and AI-generated content identification
For telecom compliance leaders, these hybrid arrangements require engagement beyond traditional regulatory relationships. Participation in code development, ODS body cooperation, and industry initiative alignment all become compliance-relevant activities.
Designing telecom-grade content moderation and risk frameworks
Telecom operators bring established risk management disciplines to content moderation challenges. Network resilience planning, lawful interception frameworks, and emergency services coordination provide foundations for addressing illegal content, misinformation, scams, and child safety at scale.
A practical framework: governance, processes, tools
Building effective content moderation and risk frameworks requires a structured approach across three layers:
Layer 1: Governance
- Executive-level ownership of content moderation policy, with clear accountability
- Cross-functional coordination between regulatory affairs, legal, security, and product teams
- Regular policy updates reflecting new laws, regulatory guidance, and threat evolution
- Board-level reporting on systemic risks and compliance status
Layer 2: Processes
- Service classification mapping each product against DSA, Online Safety Act, and national law definitions
- Notice-and-action workflows with documented timelines for response to illegal content reports (a minimal workflow sketch follows this list)
- User appeal mechanisms compatible with ODS bodies and regulatory expectations
- Incident response playbooks for coordinated action on harmful content surges
- Know-your-business-customer checks for resellers and business partners
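As a concrete illustration of the notice-and-action bullet above, the following Python sketch models a notice record with an internal response deadline and an appeal reference compatible with an ODS body. It is a simplified, assumption-laden example: the service-level hours, categories, and field names are internal design choices, not deadlines prescribed by the DSA or any national law.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Assumed internal service levels per notice category, in hours (not statutory deadlines).
RESPONSE_SLA_HOURS = {"csam": 1, "terrorism": 1, "illegal_hate_speech": 24, "copyright": 72}

@dataclass
class Notice:
    notice_id: str
    category: str
    received_at: datetime
    decision: Optional[str] = None              # e.g. "removed", "restricted", "no_action"
    decided_at: Optional[datetime] = None
    statement_of_reasons: Optional[str] = None  # recorded whenever content is restricted
    ods_reference: Optional[str] = None         # populated if the user escalates to an ODS body

    def due_by(self) -> datetime:
        hours = RESPONSE_SLA_HOURS.get(self.category, 72)
        return self.received_at + timedelta(hours=hours)

    def decide(self, decision: str, reasons: str) -> None:
        self.decision = decision
        self.decided_at = datetime.utcnow()
        self.statement_of_reasons = reasons

    def is_overdue(self, now: datetime) -> bool:
        return self.decision is None and now > self.due_by()

notice = Notice("N-0001", "illegal_hate_speech", datetime(2025, 3, 1, 9, 0))
notice.decide("removed", "Assessed against national hate speech law; notice upheld.")
print(notice.due_by(), notice.is_overdue(datetime(2025, 3, 2, 12, 0)))
```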
Layer 3: Technical and operational controls
- Automated detection systems for prohibited content categories, with human review for edge cases
- Scam and fraud detection integrated with national telecom fraud initiatives
- Age-assurance mechanisms aligned with child safety requirements
- Transparency reporting systems generating DSA-compliant disclosures (see the aggregation sketch after this list)
- Data access infrastructure for regulator and researcher requests
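The transparency-reporting control above ultimately comes down to aggregating moderation actions into the categories regulators expect to see. The sketch below shows that aggregation step only, assuming a hypothetical internal action log; the field names are invented, and the output is an input to, not a substitute for, a full DSA transparency report.

```python
from collections import Counter

# Hypothetical internal log of moderation actions (in practice exported from case tooling).
action_log = [
    {"action": "removal", "category": "illegal_hate_speech", "source": "user_notice"},
    {"action": "removal", "category": "csam", "source": "automated_detection"},
    {"action": "restriction", "category": "scam", "source": "trusted_flagger"},
    {"action": "no_action", "category": "illegal_hate_speech", "source": "user_notice"},
]

def summarise(log):
    """Count actions by (action, category, source) for a reporting period."""
    return Counter((entry["action"], entry["category"], entry["source"]) for entry in log)

for (action, category, source), count in sorted(summarise(action_log).items()):
    print(f"{action:12} {category:22} {source:22} {count}")
```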
Reconciling net neutrality and content restrictions
Telecom operators face inherent tension between net neutrality rules prohibiting traffic discrimination and court-ordered or regulatory mandates requiring content restrictions. Practical approaches include:
- Clear legal basis documentation: Ensuring every blocking action corresponds to a specific legal order or regulatory mandate
- Proportionality assessments: Evaluating over-blocking risks before implementing broad restrictions
- Transparency reporting: Disclosing volume and categories of content restrictions in regulatory filings
- Appeal pathways: Providing mechanisms for affected parties to challenge incorrect blocks
EU and Indian jurisprudence provides guidance on proportionality standards, with courts increasingly scrutinising blocking orders that restrict access beyond targeted illegal content.
Cross-border consistency
Global telecom operators need unified internal frameworks that accommodate jurisdictional variation. Effective approaches include:
- Unified content taxonomy: One internal classification system for “illegal vs harmful but legal” content, with jurisdiction-specific mappings (a minimal sketch appears at the end of this subsection)
- Global incident response playbook: Core processes applying worldwide, with regional modules addressing local requirements
- Stricter-standard compliance: Building to the highest applicable standard where feasible, reducing complexity from jurisdiction-specific variations
- Regional add-ons: Specific procedures for Germany’s NetzDG, France’s anti-hate laws, Australia’s Online Safety Act codes, and other national requirements
This modular approach enables consistent governance while accommodating the realities of multi-jurisdictional operations.
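One way to implement the unified taxonomy from the list above is a single internal label set with per-jurisdiction overlays recording how each label is treated locally. The Python sketch below is purely illustrative: the labels and jurisdiction notes are invented examples rather than legal conclusions, and unmapped combinations are deliberately escalated rather than guessed.

```python
# Internal taxonomy label -> per-jurisdiction treatment (hypothetical, illustrative entries).
TAXONOMY = {
    "hate_speech": {
        "EU": "illegal where national criminal thresholds are met; otherwise systemic-risk relevant",
        "UK": "assess against Online Safety Act illegal content and child safety duties",
        "US": "generally lawful; handled under platform terms of service",
    },
    "csam": {
        "EU": "illegal; immediate removal and reporting",
        "UK": "illegal; immediate removal and reporting",
        "US": "illegal; immediate removal and reporting",
    },
}

def treatment(label: str, jurisdiction: str) -> str:
    """Look up how an internal label is handled in a given jurisdiction."""
    return TAXONOMY.get(label, {}).get(jurisdiction, "unmapped: escalate to legal")

print(treatment("hate_speech", "UK"))
print(treatment("self_harm", "EU"))   # unmapped labels are escalated, not guessed
```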
Future directions: AI, encryption, and democracy-sensitive content
The content moderation landscape continues evolving rapidly, with three developments demanding particular attention from telecom policy teams: AI systems and synthetic media, encryption policy conflicts, and democracy-protection measures.
Generative AI and synthetic media
The emergence of AI-generated content (deepfakes, synthetic voice, and AI-powered text generation) transforms the harmful content landscape. Telecom networks carry this content; some operators host AI chatbot platforms or integrate generative AI into customer services.
The EU AI Act, which reached political agreement in late 2023 with phased application beginning 2025-2026, establishes requirements for AI developers and deployers. High-risk AI systems face conformity assessments, transparency obligations, and human oversight requirements. The European AI Office coordinates enforcement across the AI regulation framework.
China’s generative AI regulations under the Cyberspace Administration (2023) take a different approach, requiring pre-deployment review for services offered to the public. Telecom operators with cross-border services must navigate these divergent frameworks.
Synthetic media creates specific challenges for content moderation. Detecting AI-generated content at scale remains technically difficult, and the volume of such content on telecom-carried services will only grow. The AI Act and emerging industry standards address disclosure requirements, but detection and labelling remain works in progress.
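Where disclosure obligations do apply, one modest building block is checking whether distributed content carries a provenance or disclosure label before relying on it. The sketch below is a deliberately simple Python illustration that assumes a hypothetical metadata field; it does not implement any specific standard, and the absence of a label is treated as unknown rather than as evidence of human origin.

```python
from typing import Optional

def ai_disclosure_status(metadata: dict) -> str:
    """Classify an asset by a hypothetical AI-disclosure metadata field.

    Returns one of: "labelled-ai", "declared-human", "unlabelled".
    """
    label: Optional[str] = metadata.get("ai_generated")   # assumed field name
    if label == "true":
        return "labelled-ai"
    if label == "false":
        return "declared-human"
    return "unlabelled"    # route to detection or human review rather than assuming provenance

print(ai_disclosure_status({"ai_generated": "true"}))   # labelled-ai
print(ai_disclosure_status({}))                         # unlabelled
```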
Encryption and child safety tensions
The debates around scanning encrypted communications for child sexual abuse material intensified throughout 2023-2024. The UK Online Safety Act grants Ofcom powers that could theoretically require detection technologies in encrypted services, though implementation remains contested.
Parallel EU proposals (the CSA Regulation) would require providers to detect, report, and remove such material, potentially affecting end-to-end encrypted messaging services. Telecom operators running messaging platforms face the prospect of conflicting obligations between user protection through encryption and child protection through content scanning.
Standard-setting bodies including ETSI and 3GPP are engaging with these questions, exploring technical approaches that might reconcile privacy and safety objectives. Telecom operators can shape these standards through active participation.
Election integrity and political content
The DSA Elections Toolkit and national disinformation task forces impose specific obligations during electoral periods. VLOPs face enhanced duties around political advertising transparency, disinformation monitoring, and rapid response to election-related harms.
Telecom operators may encounter these requirements through:
- Ad networks: Telecom-owned advertising platforms may face political ad disclosure rules
- Messaging channels: Viral disinformation spreading through telecom-branded messaging services
- DNS-level interventions: Requests to block disinformation sources during election periods, raising significant free expression concerns
Scenario-based planning helps compliance teams prepare for these situations. Consider:
- A mobile operator hosting an AI-powered chatbot platform that generates political content
- An ISP receiving government requests for DNS-level blocks during an election
- A cross-border cloud communication service receiving conflicting takedown orders from different national authorities
Each scenario requires pre-established decision frameworks, escalation paths, and documentation practices.
Engaging proactively in standard-setting
Telecom operators have opportunities to shape technically realistic content moderation practices through participation in:
- ETSI: European standards for lawful intercept, security, and emerging content detection technologies
- 3GPP: Mobile network standards increasingly addressing trust and safety considerations
- GSMA initiatives: Industry coordination on scam prevention, AI technologies, and trusted communications
- National and regional fora: Consultation responses and code development participation
Operators who engage proactively can influence standards toward approaches that balance regulatory objectives with technical feasibility and user rights.
Preparing for 2025-2026
The full digital policy picture across major jurisdictions points to intensifying requirements through 2026. The AI Act’s phased application, the UK Online Safety Act’s code finalisation, and DSA enforcement maturation will create cumulative compliance demands.
Telecom operators building modular, jurisdiction-aware compliance frameworks now will be better positioned to adapt as requirements evolve. Those treating content moderation as a siloed compliance exercise rather than an integrated governance function will struggle with the scale and complexity of obligations ahead.
The entry points are clear: classify your services, map your obligations, build cross-functional governance, and engage with the standard-setting processes that will shape technically realistic implementation of these ambitious regulatory frameworks.