The expansion transforms private messaging into government-monitored infrastructure through AI-powered surveillance systems
The United Kingdom has crossed a significant threshold in digital surveillance policy. On January 8, 2026, new regulations under the Online Safety Act took effect, legally requiring digital platforms to deploy automated scanning systems that monitor, detect, and block user content before it reaches its intended recipients.
The Online Safety Act 2023 (Priority Offences) (Amendment) Regulations 2025 designate "cyberflashing" and "encouraging or assisting serious self-harm" as priority offences: the categories that trigger the strictest compliance duties under the Act. This isn't an incremental policy adjustment; it's a fundamental restructuring of how digital communication works in the UK.
The Technical Reality: Client-Side Scanning at Scale
Under these new rules, companies operating messaging apps, social media platforms, search engines, and forums must implement systems capable of:
- Real-time content analysis: AI models that evaluate text, images, and videos as they're created or uploaded
- Preemptive blocking: Automatic suppression of flagged content before transmission
- Continuous monitoring: Background surveillance across all user interactions
- Mass scanning infrastructure: Systems that process communications at population scale
The UK Department for Science, Innovation and Technology released promotional material showing a smartphone automatically scanning AirDropped photos and warning users about "unwanted nudes." While framed as protecting women and girls from cyberflashing, the technical requirements extend far beyond this single use case.
Compliance essentially mandates that platforms deploy what security researchers call "client-side scanning" (CSS): technology that analyzes data on user devices before encryption or after decryption, effectively bypassing end-to-end encrypted communications without technically breaking the encryption itself.
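To make the mechanism concrete, here is a minimal sketch of the CSS control flow. Everything in it is illustrative: the blocklist contents and function names are invented, and real deployments typically match perceptual hashes or run ML classifiers rather than comparing exact digests. The point is only where the check runs: on the sender's device, before encryption.

```python
import hashlib

# Hypothetical blocklist of digests of known-prohibited content,
# pushed to the device by the platform (contents invented here).
BLOCKLIST = {
    hashlib.sha256(b"known prohibited image bytes").hexdigest(),
}

def scan_before_send(payload: bytes) -> bool:
    """Return True if the payload may be transmitted.

    Runs on the sender's device *before* encryption, which is what
    lets the check coexist with end-to-end encryption in transit:
    the ciphertext is never inspected, but the plaintext already was.
    """
    digest = hashlib.sha256(payload).hexdigest()
    return digest not in BLOCKLIST

print(scan_before_send(b"an ordinary holiday photo"))       # allowed
print(scan_before_send(b"known prohibited image bytes"))    # blocked
```

Note that nothing in this flow is specific to any one content category: whatever the blocklist (or classifier) is told to match, it matches, which is the repurposing concern discussed below.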
The Encryption Dilemma
Here's where technical reality meets policy fantasy. The Act gives Ofcom, the UK communications regulator, authority to require platforms using end-to-end encryption to deploy "accredited technology" for content detection. The problem? This technology doesn't exist without fundamentally compromising encryption's security guarantees.
A 2021 study by security researchers from Cambridge, Johns Hopkins, MIT, Stanford, and other institutions, published in the Journal of Cybersecurity, concluded that client-side scanning "by its nature creates serious security and privacy risks for all society, while the assistance it can provide for law enforcement is at best problematic."
The UK government has acknowledged this reality. In September 2023, Minister Lord Parkinson stated that controversial powers allowing Ofcom to break end-to-end encryption would not be used immediately. However, nothing in the Act prevents Ofcom from issuing such notices in the future. Several messaging providers, including Signal and Element, have indicated they would withdraw from the UK market rather than implement scanning systems.
False Positives and Error Rates
AI content detection systems are notoriously imprecise. Research consistently shows these systems produce high error rates, particularly when tasked with contextual judgments about intent, harm, or legality. A European Parliament impact assessment found that current CSS solutions result in unacceptably high false positive rates.
What does this mean in practice?
- Legitimate medical images flagged as pornographic
- Art history content blocked as inappropriate
- Mental health discussions misidentified as self-harm encouragement
- Context-dependent communication stripped of nuance

Each false positive represents someone's private communication being surfaced for human review by platform moderators or potentially law enforcement, without warrant, without suspicion, without due process.
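Base-rate arithmetic shows why even an accurate-sounding classifier fails at population scale. The numbers below are illustrative assumptions, not figures from the Act or from Ofcom: one billion messages a day, one in ten thousand genuinely violating, and a classifier with 99% sensitivity and a 1% false-alarm rate.

```python
daily_messages = 1_000_000_000
violating = daily_messages // 10_000   # assume 1 in 10,000 truly violates
benign = daily_messages - violating

# Assumed classifier: 99% sensitivity, 1% false-positive rate.
true_flags = violating * 99 // 100
false_flags = benign * 1 // 100

precision = true_flags / (true_flags + false_flags)
print(f"{true_flags:,} real hits vs {false_flags:,} innocent messages flagged")
print(f"under {100 * precision:.2f}% of flags point at actual violations")
```

Under these assumptions, roughly ten million innocent messages are surfaced every day to find fewer than a hundred thousand real ones: fewer than one flag in a hundred is correct.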
The Surveillance Infrastructure Problem
Perhaps the most concerning aspect isn't what these systems do today, but what they enable tomorrow. Once client-side scanning infrastructure is deployed across devices and platforms, there's no technical limitation preventing its repurposing for broader surveillance.
The Internet Architecture Board stated in its official position that mandatory client-side scanning "creates a tool that is straightforward to abuse as a widespread facilitator of surveillance and censorship." It notes that by design, there is no technical way to limit the scope and intent of scanning, nor to curtail subsequent changes in scope or intent.
Consider the precedent: scanning infrastructure justified for child protection today could be expanded to monitor:
- Political dissent
- Journalistic sources
- Trade secrets
- Religious expression
- LGBTQ+ support communications
- Whistleblower disclosures

The Act already includes provisions for "false communication" that could be used to suppress speech the government deems misinformation. The categorization system prioritizes platforms by user count rather than potential for harm, suggesting surveillance capacity, not risk mitigation, may be the actual priority.
Penalties: The Compliance Hammer
Companies that fail to implement these surveillance systems face severe consequences:
- Fines of up to 10% of global annual revenue or £18 million, whichever is greater
- Potential service blocking in the UK
- Criminal sanctions for senior managers in extreme cases
For context, a 10% revenue penalty would represent:
- Meta (Facebook/Instagram): ~$13.5 billion
- Google: ~$30 billion
- Apple: ~$38 billion
These aren't regulatory slaps on the wrist; they're existential threats designed to ensure compliance regardless of technical feasibility or privacy concerns.
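The "whichever is greater" rule is simple to express as code. The function name is ours and the sample revenue figures are round illustrations, not any company's actual accounts; only the 10%/£18 million rule comes from the Act.

```python
def max_osa_fine(global_annual_revenue_gbp: int) -> int:
    """Maximum fine under the Act: the greater of 10% of
    global annual revenue or a fixed £18 million floor."""
    return max(global_annual_revenue_gbp // 10, 18_000_000)

# A platform with £50M global revenue: the £18M floor binds.
print(max_osa_fine(50_000_000))        # 18000000
# A platform with £300B global revenue: the 10% term binds.
print(max_osa_fine(300_000_000_000))   # 30000000000
```

The £18 million floor means even small, privacy-focused services cannot treat the penalty as a rounding error, which shapes the withdrawal scenarios discussed later.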
The Security Community Response
The technical community's response has been consistently critical. Over 70 information security and privacy academics signed an open letter stating the Act undermines privacy and safety online. Security researchers warn that:
Weakened encryption creates systemic vulnerabilities: Once scanning infrastructure is embedded in communication systems, it becomes an attack vector that malicious actors can exploit. Encryption backdoors and scanning systems don't discriminate between government use and criminal exploitation.
Trust model collapse: End-to-end encryption's security depends on users trusting that only intended recipients can access their communications. Client-side scanning fundamentally breaks this trust model, even if encryption remains technically intact during transmission.
Function creep risk: Systems designed for one purpose inevitably expand. The same scanning infrastructure used for cyberflashing detection becomes reusable for any content category a future government deems problematic.
National security implications: Former U.S. national security and law enforcement officials have highlighted how widespread encryption is crucial for securing digital infrastructure. Weakening it, even with good intentions, creates national security vulnerabilities.
What About the Legitimate Goals?
The Act's stated purposes, preventing cyberflashing and the encouragement of self-harm, represent genuine social concerns. Women and girls do face harassment through unsolicited sexual images. Vulnerable individuals do encounter harmful content online.
But the question isn't whether these problems exist. It's whether mass surveillance infrastructure represents a proportionate, effective, or safe solution.
Alternative approaches exist:
- Improved reporting mechanisms: Making it easier for users to report abuse and ensuring swift action
- Opt-in scanning: Allowing users who want protection to enable it, rather than mandating universal monitoring
- Server-side filtering: Platforms already scan uploaded content on their servers; this doesn't require breaking encryption
- Education and digital literacy: Teaching users how to use privacy controls and recognize threats
- Law enforcement capacity building: Better training for investigating reports of online abuse using existing legal tools
None of these alternatives require converting every smartphone and computer into a surveillance device.
The Geopolitical Precedent
The UK isn't alone in pursuing these policies. The European Union's proposed "Chat Control" legislation follows a similar trajectory, potentially mandating even broader scanning requirements. Australia has enacted comparable measures. Several authoritarian governments are watching closely, eager to adopt "public safety" frameworks that democracies have legitimized.
When the UK, a G7 democracy, implements mandatory surveillance infrastructure, it provides cover for authoritarian regimes to do the same. The difference is that democratic governments at least face public accountability and judicial oversight. Authoritarian states face no such constraints.
Security researchers note that authoritarian governments already attempt to copy Western surveillance playbooks. Proton, the Swiss encrypted email provider, states it hasn't broken encryption for governments in China or Iran, and won't for the UK government either, recognizing that doing so would endanger users globally.
What Happens Next?
Ofcom must now develop codes of practice specifying exactly what steps platforms must take to comply. This consultation process represents a critical juncture. Ofcom has stated it will "strike an appropriate balance, intervening to protect users from harm where necessary, while ensuring that regulation appropriately protects privacy and freedom of expression, and promotes innovation."
The government itself has acknowledged that if appropriate technology doesn't exist, Ofcom cannot require its use. In theory, Ofcom could determine that no technology can satisfy the Act's requirements without endangering privacy and security.
In practice, political pressure to "do something" about online harms may override technical reality.
Several outcomes are possible:
- Platform withdrawal: Some services, particularly privacy-focused ones, may geoblock UK users rather than implement scanning
- Split offerings: Companies might offer degraded services to UK users while maintaining encryption elsewhere
- Legal challenges: Judicial reviews may test whether the Act's requirements violate human rights obligations
- Technical compliance theater: Platforms might implement minimal scanning systems that satisfy legal requirements without actually working effectively
- Full implementation: Widespread deployment of client-side scanning across major platforms, fundamentally altering the global internet
For Security Professionals
If you're responsible for securing communications or advising on privacy policy, this regulatory shift demands attention:
Risk assessment updates: Organizations using affected platforms need to reassess their data protection and communications security postures. If platforms implement scanning, what does that mean for attorney-client privilege, medical information, trade secrets, or journalistic sources?
Alternative secure channels: Consider whether your organization needs communication channels that aren't subject to UK jurisdiction or scanning requirements.
Policy advocacy: The security community's voice matters in these debates. Technical experts need to clearly articulate the risks to policymakers who may not understand the implications.
Monitoring compliance approaches: Watch how platforms actually implement these requirements. The gap between legal mandates and technical capabilities will be revealing.
The Broader Pattern
The Online Safety Act's expansion is part of a broader pattern in democracies worldwide: sacrificing digital security and privacy for the appearance of safety. The pattern includes:
- Age verification systems that create massive databases of citizens' identities tied to their online activities
- Data retention mandates requiring ISPs to log all customer communications
- Encryption backdoors justified by child protection or counterterrorism
- Content moderation requirements that incentivize over-censorship

Each policy individually seems reasonable to non-technical audiences. Collectively, they're building surveillance infrastructure that fundamentally changes the nature of digital communication from private by default to monitored by design.
Conclusion: Surveillance as Infrastructure
What makes the Online Safety Act expansion particularly significant isn't just what it requires today, but how it embeds surveillance capabilities into communication infrastructure going forward. Once scanning systems are deployed, once the technical and legal precedents are established, once platforms have built the capability to monitor all user content, the question isn't whether that capability will be expanded, but when and how.
The Act positions large sections of the internet under continuous monitoring, with user privacy treated as a secondary concern rather than a fundamental right. It demands that companies make moral judgments in real time through automated systems incapable of understanding context, intent, or nuance.
Perhaps most troublingly, it demonstrates that democratic governments are willing to mandate the same surveillance infrastructure they publicly criticize authoritarian regimes for deploying, so long as the justification seems sufficiently compelling.
The cybersecurity community has a responsibility to make clear what this trade-off actually entails. Not because child protection isn't important, not because online harassment isn't real, but because the solution being implemented may ultimately make everyone less safe while providing only the illusion of protection.
When you build surveillance infrastructure into communication systems, you don't just monitor the bad guys. You monitor everyone. And once that infrastructure exists, there's no technical way to ensure it's only used for its original purpose.
The UK's expansion of the Online Safety Act isn't about protecting women and children online. It's about normalizing surveillance as the default state of digital communication. And that precedent, once established, will be extraordinarily difficult to reverse.
For security professionals, the critical question isn't whether your organization must comply with the Online Safety Act's requirements, but whether you're prepared for the security implications when platforms implement the surveillance infrastructure these regulations demand.