The UK's Digital Dragnet: How the Online Safety Act Expansion Turns Every Message Into Government-Monitored Data

The expansion transforms private messaging into government-monitored infrastructure through AI-powered surveillance systems.

The United Kingdom has crossed a significant threshold in digital surveillance policy. On January 8, 2026, new regulations under the Online Safety Act took effect, legally requiring digital platforms to deploy automated scanning systems that monitor, detect, and block user content before it reaches its intended recipients.

The Online Safety Act 2023 (Priority Offences) (Amendment) Regulations 2025 designate "cyberflashing" and "encouraging or assisting serious self-harm" as priority offenses—categories that trigger the strictest compliance duties under the Act. This isn't an incremental policy adjustment; it's a fundamental restructuring of how digital communication works in the UK.

The Technical Reality: Client-Side Scanning at Scale

Under these new rules, companies operating messaging apps, social media platforms, search engines, and forums must implement systems capable of:

  • Real-time content analysis: AI models that evaluate text, images, and videos as they're created or uploaded
  • Preemptive blocking: Automatic suppression of flagged content before transmission
  • Continuous monitoring: Background surveillance across all user interactions
  • Mass scanning infrastructure: Systems that process communications at population scale

The UK Department for Science, Innovation and Technology released promotional material showing a smartphone automatically scanning AirDropped photos and warning users about “unwanted nudes.” While framed as protecting women and girls from cyberflashing, the technical requirements extend far beyond this single use case.

Compliance essentially mandates that platforms deploy what security researchers call “client-side scanning” (CSS)—technology that analyzes data on user devices before encryption or after decryption, effectively bypassing end-to-end encrypted communications without technically breaking the encryption itself.
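To make the mechanism concrete, here is a minimal, hypothetical sketch of where client-side scanning sits in a messaging pipeline. The hash-matching approach, function names, and blocklist below are illustrative assumptions, not any vendor's actual implementation; real deployments typically rely on perceptual hashing and ML classifiers rather than exact digests. The key point the sketch demonstrates is that the scan operates on plaintext, before encryption ever runs.

```python
import hashlib

# Hypothetical blocklist of content digests (illustrative only).
BLOCKLIST = {hashlib.sha256(b"known-prohibited-content").hexdigest()}

def client_side_scan(plaintext: bytes) -> bool:
    """Return True if the message may proceed to encryption."""
    digest = hashlib.sha256(plaintext).hexdigest()
    return digest not in BLOCKLIST

def send_message(plaintext: bytes, encrypt):
    # The scan runs on the *plaintext*, before encryption happens.
    # End-to-end encryption is never "broken" -- it is simply bypassed.
    if not client_side_scan(plaintext):
        return None  # blocked pre-transmission; could also be reported
    return encrypt(plaintext)

# Toy "encryption" stand-in, purely for demonstration.
ciphertext = send_message(b"hello", encrypt=lambda p: p[::-1])
blocked = send_message(b"known-prohibited-content", encrypt=lambda p: p[::-1])
```

Because the check happens device-side against plaintext, the cryptographic guarantees of the transport are irrelevant to what the scanner can see, which is exactly the property researchers object to.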

The Encryption Dilemma

Here’s where technical reality meets policy fantasy. The Act gives Ofcom, the UK communications regulator, authority to require platforms using end-to-end encryption to deploy “accredited technology” for content detection. The problem? This technology doesn’t exist without fundamentally compromising encryption’s security guarantees.

A 2021 study by security researchers from Cambridge, Johns Hopkins, MIT, Stanford, and other institutions—published in the Journal of Cybersecurity—concluded that client-side scanning “by its nature creates serious security and privacy risks for all society, while the assistance it can provide for law enforcement is at best problematic.”

The UK government has acknowledged this reality. In September 2023, Minister Lord Parkinson stated that controversial powers allowing Ofcom to break end-to-end encryption would not be used immediately. However, nothing in the Act prevents Ofcom from issuing such notices in the future. Several messaging providers, including Signal and Element, have indicated they would withdraw from the UK market rather than implement scanning systems.

False Positives and Error Rates

AI content detection systems are notoriously imprecise. Research consistently shows these systems produce high error rates, particularly when tasked with contextual judgments about intent, harm, or legality. A European Parliament impact assessment found that current CSS solutions result in unacceptably high false positive rates.

What does this mean in practice?

  • Legitimate medical images flagged as pornographic
  • Art history content blocked as inappropriate
  • Mental health discussions misidentified as self-harm encouragement
  • Context-dependent communication stripped of nuance

Each false positive represents someone’s private communication being surfaced for human review by platform moderators or potentially law enforcement—without warrant, without suspicion, without due process.
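The base-rate problem behind this can be quantified. The figures below are illustrative assumptions (the message volume, prevalence, and error rates are not drawn from any cited study), but the arithmetic shows why even a seemingly accurate classifier produces enormous absolute numbers of false flags at population scale:

```python
# Illustrative base-rate arithmetic (all figures are assumptions).
messages_per_day = 10_000_000_000   # assumed daily message volume
prevalence = 1e-6                   # assumed fraction of truly harmful messages
false_positive_rate = 0.001         # assumed 0.1% FPR -- optimistic for contextual judgments
true_positive_rate = 0.95           # assumed 95% detection rate

harmful = messages_per_day * prevalence
benign = messages_per_day - harmful

false_flags = benign * false_positive_rate   # innocent messages flagged
true_flags = harmful * true_positive_rate    # harmful messages caught

# Precision: of all flagged messages, what share is actually harmful?
precision = true_flags / (true_flags + false_flags)

print(f"False flags per day: {false_flags:,.0f}")
print(f"Precision: {precision:.2%}")
```

Under these assumptions, roughly ten million innocent messages would be surfaced every day, and fewer than one flag in a thousand would be a true positive.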

The Surveillance Infrastructure Problem

Perhaps the most concerning aspect isn’t what these systems do today, but what they enable tomorrow. Once client-side scanning infrastructure is deployed across devices and platforms, there’s no technical limitation preventing its repurposing for broader surveillance.

The Internet Architecture Board stated in their official position that mandatory client-side scanning “creates a tool that is straightforward to abuse as a widespread facilitator of surveillance and censorship.” They note that by design, there is no technical way to limit the scope and intent of scanning, nor curtail subsequent changes in scope or intent.

Consider the precedent: scanning infrastructure justified for child protection today could be expanded to monitor:

  • Political dissent
  • Journalistic sources
  • Trade secrets
  • Religious expression
  • LGBTQ+ support communications
  • Whistleblower disclosures

The Act already includes provisions for “false communication” that could be used to suppress speech the government deems misinformation. The categorization system prioritizes platforms by user count rather than potential for harm, suggesting surveillance capacity—not risk mitigation—may be the actual priority.

Penalties: The Compliance Hammer

Companies that fail to implement these surveillance systems face severe consequences:

  • Fines up to 10% of global annual revenue or £18 million, whichever is greater
  • Potential service blocking in the UK
  • Criminal sanctions for senior managers in extreme cases

For context, a 10% revenue penalty would represent:

  • Meta (Facebook/Instagram): ~$13.5 billion
  • Google: ~$30 billion
  • Apple: ~$38 billion

These aren’t regulatory slaps on the wrist—they’re existential threats designed to ensure compliance regardless of technical feasibility or privacy concerns.
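The "whichever is greater" formula is simple to express. A minimal sketch (the revenue figures are rough illustrative inputs, not audited numbers):

```python
def osa_max_fine(global_annual_revenue_gbp: float) -> float:
    """Maximum Online Safety Act fine: 10% of global annual revenue
    or GBP 18 million, whichever is greater."""
    return max(0.10 * global_annual_revenue_gbp, 18_000_000)

# Even a small platform faces the GBP 18m floor:
small = osa_max_fine(50_000_000)        # 10% would be 5m, so the floor applies
# A large platform's exposure scales with revenue:
large = osa_max_fine(100_000_000_000)   # 10% of 100bn = 10bn
print(small, large)
```

The floor means the penalty regime bites regardless of company size: small providers cannot simply absorb a percentage-based fine.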

The Security Community Response

The technical community’s response has been consistently critical. Over 70 information security and privacy academics signed an open letter stating the Act undermines privacy and safety online. Security researchers warn that:

Weakened encryption creates systemic vulnerabilities: Once scanning infrastructure is embedded in communication systems, it becomes an attack vector that malicious actors can exploit. Encryption backdoors and scanning systems don’t discriminate between government use and criminal exploitation.

Trust model collapse: End-to-end encryption’s security depends on users trusting that only intended recipients can access their communications. Client-side scanning fundamentally breaks this trust model, even if encryption remains technically intact during transmission.

Function creep risk: Systems designed for one purpose inevitably expand. The same scanning infrastructure used for cyberflashing detection becomes reusable for any content category a future government deems problematic.

National security implications: Former U.S. national security and law enforcement officials have highlighted how widespread encryption is crucial for securing digital infrastructure. Weakening it—even with good intentions—creates national security vulnerabilities.

What About the Legitimate Goals?

The Act’s stated purposes—preventing cyberflashing and self-harm encouragement—represent genuine social concerns. Women and girls do face harassment through unsolicited sexual images. Vulnerable individuals do encounter harmful content online.

But the question isn’t whether these problems exist. It’s whether mass surveillance infrastructure represents a proportionate, effective, or safe solution.

Alternative approaches exist:

  • Improved reporting mechanisms: Making it easier for users to report abuse and ensuring swift action
  • Opt-in scanning: Allowing users who want protection to enable it, rather than mandating universal monitoring
  • Server-side filtering: Platforms already scan uploaded content on their servers; this doesn't require breaking encryption
  • Education and digital literacy: Teaching users how to use privacy controls and recognize threats
  • Law enforcement capacity building: Better training for investigating reports of online abuse using existing legal tools

None of these alternatives require converting every smartphone and computer into a surveillance device.

The Geopolitical Precedent

The UK isn’t alone in pursuing these policies. The European Union’s proposed “Chat Control” legislation follows a similar trajectory, potentially mandating even broader scanning requirements. Australia has enacted comparable measures. Several authoritarian governments are watching closely, eager to adopt “public safety” frameworks that democracies have legitimized.

When the UK—a G7 democracy—implements mandatory surveillance infrastructure, it provides cover for authoritarian regimes to do the same. The difference is that democratic governments at least face public accountability and judicial oversight. Authoritarian states face no such constraints.

Security researchers note that authoritarian governments already attempt to copy Western surveillance playbooks. Proton, the Swiss encrypted email provider, states it hasn’t broken encryption for governments in China or Iran, and won’t for the UK government either—recognizing that doing so would endanger users globally.

What Happens Next?

Ofcom must now develop codes of practice specifying exactly what steps platforms must take to comply. This consultation process represents a critical juncture. Ofcom has stated it will “strike an appropriate balance, intervening to protect users from harm where necessary, while ensuring that regulation appropriately protects privacy and freedom of expression, and promotes innovation.”

The government itself has acknowledged that if appropriate technology doesn’t exist, Ofcom cannot require its use. In theory, Ofcom could determine that no technology can satisfy the Act’s requirements without endangering privacy and security.

In practice, political pressure to “do something” about online harms may override technical reality.

Several outcomes are possible:

  • Platform withdrawal: Some services, particularly privacy-focused ones, may geoblock UK users rather than implement scanning
  • Split offerings: Companies might offer degraded services to UK users while maintaining encryption elsewhere
  • Legal challenges: Judicial reviews may test whether the Act's requirements violate human rights obligations
  • Technical compliance theater: Platforms might implement minimal scanning systems that satisfy legal requirements without actually working effectively
  • Full implementation: Widespread deployment of client-side scanning across major platforms, fundamentally altering the global internet

For Security Professionals

If you’re responsible for securing communications or advising on privacy policy, this regulatory shift demands attention:

Risk assessment updates: Organizations using affected platforms need to reassess their data protection and communications security postures. If platforms implement scanning, what does that mean for attorney-client privilege, medical information, trade secrets, or journalistic sources?

Alternative secure channels: Consider whether your organization needs communication channels that aren’t subject to UK jurisdiction or scanning requirements.

Policy advocacy: The security community’s voice matters in these debates. Technical experts need to clearly articulate the risks to policymakers who may not understand the implications.

Monitoring compliance approaches: Watch how platforms actually implement these requirements. The gap between legal mandates and technical capabilities will be revealing.

The Broader Pattern

The Online Safety Act’s expansion is part of a broader pattern in democracies worldwide: sacrificing digital security and privacy for the appearance of safety. The pattern includes:

  • Age verification systems that create massive databases of citizens' identities tied to their online activities
  • Data retention mandates requiring ISPs to log all customer communications
  • Encryption backdoors justified by child protection or counterterrorism
  • Content moderation requirements that incentivize over-censorship

Each policy individually seems reasonable to non-technical audiences. Collectively, they’re building surveillance infrastructure that fundamentally changes the nature of digital communication from private by default to monitored by design.

Conclusion: Surveillance as Infrastructure

What makes the Online Safety Act expansion particularly significant isn’t just what it requires today, but how it embeds surveillance capabilities into communication infrastructure going forward. Once scanning systems are deployed, once the technical and legal precedents are established, once platforms have built the capability to monitor all user content—the question isn’t whether that capability will be expanded, but when and how.

The Act positions large sections of the internet under continuous monitoring, with user privacy treated as a secondary concern rather than a fundamental right. It demands that companies make moral judgments in real time through automated systems incapable of understanding context, intent, or nuance.

Perhaps most troublingly, it demonstrates that democratic governments are willing to mandate the same surveillance infrastructure they publicly criticize authoritarian regimes for deploying—so long as the justification seems sufficiently compelling.

The cybersecurity community has a responsibility to make clear what this trade-off actually entails. Not because child protection isn’t important, not because online harassment isn’t real, but because the solution being implemented may ultimately make everyone less safe while providing only the illusion of protection.

When you build surveillance infrastructure into communication systems, you don’t just monitor the bad guys. You monitor everyone. And once that infrastructure exists, there’s no technical way to ensure it’s only used for its original purpose.

The UK’s expansion of the Online Safety Act isn’t about protecting women and children online. It’s about normalizing surveillance as the default state of digital communication. And that precedent, once established, will be extraordinarily difficult to reverse.


For security professionals, the critical question isn’t whether your organization must comply with the Online Safety Act’s requirements—but whether you’re prepared for the security implications when platforms implement the surveillance infrastructure these regulations demand.