The numbers don’t lie: Germany’s own data proves the EU’s proposed “Chat Control” surveillance system would flood police with false reports while decimating digital privacy.


Germany just handed the European Union an inconvenient truth that undermines the entire foundation of its controversial “Chat Control” proposal. In 2024 alone, nearly half of all tips Germany received through the existing voluntary scanning system were false alarms. According to the Federal Criminal Police Office (BKA), 99,375 of the 205,728 reports forwarded by the US-based National Center for Missing and Exploited Children (NCMEC) were not criminally relevant, an error rate of 48.3%.


These aren’t just statistics—they represent 99,375 instances of innocent personal content being falsely flagged and forwarded to authorities in a single year. Family beach photos. Medical images shared with doctors. Artistic photography. All mistakenly classified as potential child sexual abuse material (CSAM) and sent to law enforcement for investigation.

But here’s the kicker: this catastrophic failure rate comes from the current voluntary system that doesn’t even scan encrypted messages. And now the EU wants to make it mandatory for everyone, everywhere, all the time—including your private WhatsApp and Signal conversations.

The Current System Is Already Broken

The current “Chat Control 1.0” framework is voluntary and does not apply to end-to-end encrypted services. Only some platforms, such as Gmail, Facebook Messenger, and Skype, participate. Many of the reports are generated by private tech companies such as Meta, Microsoft, and Google, which voluntarily scan users’ communications for possible CSAM and pass the results to NCMEC.

The false positive problem isn’t unique to Germany. In Ireland, only 852 of the 4,192 reports received by Irish police (20.3%) turned out to involve actual exploitation material, and 471 (11.2%) were explicitly marked as false positives. False-positive rates of up to 80%, as reported in Switzerland, risk wrongful accusations.

LinkedIn reported 75 accounts to EU authorities in the second half of 2021 after its automated matching flagged files as known CSAM. On manual review, only 31 of those cases involved confirmed CSAM: nearly 60% false positives. And LinkedIn uses PhotoDNA, the same technology specifically recommended by US lawmakers.
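These rates follow directly from the counts cited above; a quick check in Python, using only the figures already quoted in this article:

```python
# Recomputing the false-positive rates quoted above from the reported counts.

# Germany (BKA, 2024): NCMEC reports forwarded vs. those not criminally relevant
germany_total, germany_not_relevant = 205_728, 99_375
print(f"Germany: {germany_not_relevant / germany_total:.1%} not criminally relevant")  # 48.3%

# Ireland: reports received vs. confirmed material and explicit false positives
ireland_total, ireland_confirmed, ireland_false = 4_192, 852, 471
print(f"Ireland: {ireland_confirmed / ireland_total:.1%} confirmed, "
      f"{ireland_false / ireland_total:.1%} marked as false positives")                # 20.3%, 11.2%

# LinkedIn (H2 2021): accounts reported vs. confirmed CSAM after manual review
linkedin_reported, linkedin_confirmed = 75, 31
print(f"LinkedIn: {(linkedin_reported - linkedin_confirmed) / linkedin_reported:.1%} "
      f"false positives after manual review")                                          # 58.7%
```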


The EU’s Plan Makes Everything Worse

Despite this damning evidence, Denmark, which holds the rotating presidency of the Council of the EU from July 1 until the end of 2025, has made Chat Control a top priority. The next Council debate and vote on Chat Control will reportedly take place on October 14, 2025, when Danish leaders will attempt to secure adoption.

The proposed “Chat Control 2.0” would be far more invasive than the current broken system:

Mass Scanning: The proposal would require providers to scan chats, messages, emails, and cloud storage, breaking end-to-end encryption on platforms like WhatsApp and Signal.

Client-Side Surveillance: This scanning would happen on users’ devices through “client-side scanning” technology, effectively creating backdoors in encrypted communications (a simplified sketch of how such on-device matching works follows this list).

AI Guesswork: The Commission’s original proposal contains a requirement that platforms, once served with a detection order, must scan people’s messages not just for known CSAM but also for previously unknown material, which only AI classifiers could attempt. This further ramps up the technical challenge of detecting illegal content with high accuracy and few false positives.

Text Analysis: A further component in the Commission’s proposal requires platforms to identify grooming activity in real time. This means, in addition to scanning imagery uploads for CSAM, apps would need to be able to parse the contents of users’ communications to try to understand when an adult user might be trying to lure a minor to engage in sexual activity.
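To make the mechanism concrete, here is a deliberately simplified sketch of how on-device, hash-based matching works in principle. This is not the EU proposal’s specification and not PhotoDNA itself (which is proprietary); the hash values and the match threshold below are hypothetical, chosen only to show why tolerant matching produces false positives.

```python
# Simplified illustration of on-device, hash-based image matching (the general idea behind
# tools like PhotoDNA). NOT the EU proposal's specification or any vendor's real algorithm;
# the hash values and threshold are purely hypothetical.

KNOWN_HASHES = {0x9F3A_55C2_10DE_77A1, 0x0B42_91EE_C3A7_5D08}  # hypothetical 64-bit perceptual hashes
MATCH_THRESHOLD = 10  # max Hamming distance still counted as a "match" (hypothetical)

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two 64-bit hashes."""
    return bin(a ^ b).count("1")

def flag_image(image_hash: int) -> bool:
    """Return True if the image's perceptual hash is 'close enough' to any known hash.

    The tolerance that lets the scanner catch re-compressed or slightly edited copies
    is the same tolerance that lets unrelated images collide, i.e. produce false positives.
    """
    return any(hamming_distance(image_hash, known) <= MATCH_THRESHOLD for known in KNOWN_HASHES)

# A hash only 8 bits away from a known entry is flagged, whether or not the underlying
# picture is actually related to the known image.
print(flag_image(0x9F3A_55C2_10DE_77A1 ^ 0b1111_1111))  # True (distance 8)
print(flag_image(0x1234_5678_9ABC_DEF0))                # False (distance well above threshold)
```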

The Math Doesn’t Add Up

Security experts have warned that expanding this broken system would create millions of false positives per day. Given the large number of messages sent on these platforms (on the order of billions), one can expect a very large number of false alarms (on the order of millions), according to a letter signed by 270 academics and security researchers.

To get false positives down to the hundreds, one would statistically need to flag a user only after at least five repeated detections involving different, statistically independent images or detectors. And that is only for WhatsApp; once other messaging platforms, including email, are considered, the number of necessary repetitions grows significantly.
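The arithmetic behind that warning is straightforward. The sketch below uses illustrative numbers, an assumed daily message volume and an assumed per-scan false-positive rate, not figures from the letter; the point is how expected false alarms scale with volume, and how many independent detections are needed to pull them back down.

```python
# Back-of-the-envelope arithmetic behind the researchers' warning. The daily volume and the
# per-scan false-positive rate are illustrative assumptions, not figures from the letter.

daily_messages = 100e9   # assumed order of magnitude for a large messaging platform
per_scan_fpr = 0.01      # assumed chance that a single scan wrongly flags innocuous content

# One scan per message: expected false alarms scale linearly with volume.
print(f"Single scan: ~{daily_messages * per_scan_fpr:,.0f} false alarms/day")

# Reporting a user only after k statistically independent detections multiplies the
# false-alarm probability down to per_scan_fpr ** k.
for k in range(1, 6):
    expected = daily_messages * per_scan_fpr ** k
    print(f"k = {k}: ~{expected:,.0f} expected false alarms/day")
# With these assumed numbers, only around k = 5 does the expectation drop from the
# thousands into the tens, and that covers a single platform; adding more platforms
# and email pushes it back up.
```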

This isn’t just a technical problem—it’s a human rights disaster waiting to happen. Millions of innocent people could be erroneously implicated in suspicious activity, burdening law enforcement with a pipeline of false reports.

According to the Council Legal Service, the Danish Chat Control proposal still violates human rights and “the core problems of access to communication for potentially all users remained unchanged.”

In February 2024, the European Court of Human Rights ruled, in an unrelated case, that a legal requirement to weaken end-to-end encryption “cannot be regarded as necessary in a democratic society”. The writing is on the wall: mass surveillance of private communications violates fundamental human rights.

“The EU legal advisors have concluded that European Chat Control proposals which would require tech companies to scan private and encrypted messages for child abuse material (CSAM) are in breach with EU law,” Global Government Affairs (@GlobalAffairs) posted on September 4, 2025 (https://t.co/ahakJAV3aT).

Major Platforms Are Ready to Leave

The tech industry isn’t bluffing about the consequences. Signal and other privacy-focused platforms have threatened to shut down their European operations rather than implement scanning technology that would undermine their security promises. Telegram pledges to exit the market rather than “undermine encryption with backdoors”.

When the most privacy-conscious platforms abandon Europe, users won’t switch to surveilled alternatives—they’ll find workarounds. Criminals will simply use encrypted ZIP files, foreign platforms, or custom encryption. Meanwhile, ordinary citizens lose access to secure communication tools that protect journalists, activists, abuse victims, and anyone who values privacy.

The Political Chess Game

The Danish presidency, which began in July 2025, has reignited momentum, reportedly securing the support of 19 of the 27 Member States. But the political landscape is shifting rapidly.

Many countries in the Council that helped block Chat Control in 2024 are now wavering. Belgium, the Czech Republic, Estonia, Finland, Germany, Greece, Slovenia, Luxembourg, and Romania remain undecided or unclear, while several formerly opposed governments, such as France, have already given up their opposition.

According to former MEP Patrick Breyer, Denmark crucially needs to convince Germany to back its proposed text. The new German government had not taken a position at the time of writing, and its decision could determine whether Europe embraces mass surveillance or maintains its commitment to digital rights.


Timeline: October Deadline Approaching

The critical dates are fast approaching:

  • September 12, 2025: Member States are expected to finalise their positions in Council working groups
  • October 14, 2025: Earliest possible Council vote on the proposal

If passed without robust safeguards, the law could establish a precedent for scanning private communications on a large scale, thereby reshaping the balance between online safety and privacy not just in Europe, but globally.

The Bottom Line

Germany’s 2024 data exposes the fundamental flaw in Chat Control: it doesn’t work. A system that generates nearly 100,000 false reports annually while scanning only voluntary, unencrypted platforms would become a privacy-destroying nightmare if expanded to mandatory, universal surveillance.

Mass surveillance is being pushed as the way to combat abuse, even though the mechanisms for detecting abuse are inaccurate and people’s personal content is already being falsely flagged and forwarded to authorities.

The EU faces a clear choice: abandon this technically broken, legally dubious, and fundamentally authoritarian approach, or transform Europe into a surveillance state where every private message, photo, and conversation is subject to algorithmic judgment and potential police investigation.

As X’s Global Government Affairs team correctly notes, “CSAM can be addressed through less-intrusive means, including beefed up law enforcement and enhanced cooperation” rather than mass surveillance that “threatens Europeans’ human rights, their right to privacy and freedom of expression” and “could expose people’s personal data to a host of actors and degrade cybersecurity for individuals, businesses, and governments globally.”

The assumption that every user is a potential suspect leads to widespread, suspicionless scanning, raising concerns that go well beyond the fight against abuse.

With Germany’s own law enforcement data proving the system’s failure, there’s no excuse for moving forward. The October vote will determine whether Europe remains a beacon of digital rights or becomes another surveillance state where privacy is sacrificed for the illusion of safety.


The October 14, 2025 vote represents a turning point for digital privacy in Europe. Contact your representatives now—before it’s too late to preserve encrypted communication for everyone.