How EU Digital Laws Are Being Weaponized to Control Speech During Campaign Season

As the Netherlands prepares for a critical parliamentary election on October 29, 2025, the country’s competition regulator is leveraging sweeping EU digital laws to pressure major social media platforms into aggressive content moderation—raising serious questions about who gets to decide what voters can see and discuss online.

The September Summons

The Dutch Authority for Consumers and Markets (ACM) has summoned a dozen major digital platforms, including X, Facebook, and TikTok, to a meeting on September 15. The goal is to pressure these companies into clamping down on whatever officials define as “disinformation” or “illegal hate content” before voters cast their ballots.

The session will also involve the European Commission, national regulators and civil society groups, reinforcing a growing trend where unelected bureaucrats, activist organizations, and corporate gatekeepers coordinate to shape public conversation online.

Election Context: A Government in Crisis

The vote was called early in June after the Dutch government collapsed over migration policy disputes. This political instability has set the stage for a contentious election where immigration, EU policy, and national sovereignty are likely to be hot-button issues—precisely the topics that often fall under regulators’ expansive definitions of “problematic content.”

The early election timing adds urgency to the ACM’s efforts, as officials rush to establish content moderation frameworks just weeks before voters head to the polls.

The Digital Services Act: A Censorship Tool Disguised as Safety

Central to the effort is the EU’s Digital Services Act (DSA), a law that hands governments broad authority to demand content removals based on vague and shifting definitions such as “harmful” or “illegal”.

Under European rules, large online platforms must combat illegal hate speech, malicious foreign interference, and disinformation on their platforms. “Under the Digital Services Act (DSA), they must implement a transparent and diligent policy regarding the content of their platforms and take effective measures against illegal content. This is especially important during elections,” ACM director Manon Leijten stressed.

The law designates certain platforms as “Very Large Online Platforms” (VLOPs), subjecting them to enhanced obligations: they must uphold transparent content moderation policies and act decisively against illegal material.

Pre-Election Pressure Campaign

On July 21, the ACM contacted the platforms to outline their legal obligations, request contact details for their Trust and Safety teams, and collect responses to a questionnaire on measures to safeguard public debate. It also asked them to describe their strategies for ensuring what it calls “electoral integrity”.

The September meeting will evaluate how the companies plan to tackle disinformation, foreign interference, and illegal hate speech during the campaign period, particularly with regard to user-generated content.

The Definitional Problem

What is being marketed as a safeguard against threats to democracy is, in practice, a tightening of the boundaries around what people are allowed to say. While officials claim this is about transparency, the real issue remains unaddressed. Who exactly gets to determine what counts as dangerous or misleading?

The Dutch approach exemplifies a broader European trend where “disinformation” and “hate speech” have become catch-all categories that can be applied to virtually any content that challenges official narratives or mainstream political positions. The timing—just weeks before a contentious election—suggests these tools may be used to influence electoral outcomes by limiting what voters can see and discuss.

A System Ripe for Abuse

The regulator is also encouraging users to report directly if they believe platforms have mishandled flagged content, setting up a system that could easily be gamed to suppress dissent. This crowd-sourced censorship model creates perverse incentives where political activists can flood platforms with reports to silence opposing viewpoints.

The Dutch model follows Germany’s controversial NetzDG law, which has been criticized for leading to over-censorship as platforms err on the side of removing content rather than risk massive fines. Critics argue that the rising volume of NetzDG complaints and platform takedowns has encouraged self-censorship and hampered freedom of speech and expression.

International Precedents and Concerns

Recent events across Europe demonstrate how digital speech laws are being weaponized for political purposes amid growing concerns about foreign interference in elections. In December, for example, the European Commission opened an investigation into TikTok over possible Russian interference in the Romanian elections.

The Romania case is particularly troubling, as the country’s Constitutional Court actually canceled the presidential election results, citing social media influence. This sets a dangerous precedent where election outcomes can be nullified based on claims about online content.

Platform Responses and Resistance

Notably absent from the 42 platforms that committed to a strengthened code of conduct, a list that includes companies owned by Google, Meta, and Microsoft, was Elon Musk’s social media platform X. Musk withdrew the platform, then known as Twitter, from the original code in May 2023 and has repeatedly railed against the European Union’s content moderation rules.

Meta remains part of the code, even though CEO Mark Zuckerberg has aligned himself with the new White House and slammed EU rules as “censorship” in January, when he announced a halt to U.S. fact-checking operations on Facebook and Instagram.

The mixed responses from platforms highlight the growing tension between American tech companies and European regulators over fundamental questions of free speech and content moderation.

The ACM’s Expanding Digital Authority

The ACM was recently authorized to enforce compliance with several new European laws, including the Platform-to-Business Regulation, the Digital Services Act, and the Digital Markets Act. This represents a massive expansion of regulatory power over digital communications.

Manon Leijten, Member of the Board of ACM, explains: “We are now officially authorized to enforce various digital laws. That means that we can truly work on a secure and fair digital economy. We will give priority to, among other topics, the protection of minors against online abuse and deception”.

The regulator’s language reveals the mission creep inherent in these laws—what starts as “protecting minors” quickly expands to controlling adult political discourse under the guise of fighting “disinformation.”

Broader Implications for Democratic Discourse

Enforcing the rules against the dissemination of disinformation and hate speech on social media has become a key priority for the ACM in 2025. However, the agency provides no clear criteria for distinguishing between legitimate political debate and prohibited “disinformation.”

This represents a fundamental shift in how democratic societies approach political speech. Rather than allowing open debate and trusting voters to evaluate competing claims, European regulators are increasingly positioning themselves as arbiters of truth—a role traditionally reserved for voters themselves.

The Timing Problem

The September 15 meeting, coming just six weeks before the election, raises serious questions about electoral manipulation. By pressuring platforms to increase content moderation during the most critical period of democratic debate, Dutch regulators may be tilting the playing field in favor of establishment voices while silencing dissenting perspectives.

This approach mirrors tactics used by authoritarian regimes that restrict information flow during election periods, albeit under the more palatable banner of fighting “disinformation.”

What’s at Stake

The Dutch election pressure campaign represents a test case for how far European regulators can go in controlling online political speech. If successful, this model will likely be exported to other EU countries facing elections, creating a continent-wide system of pre-election censorship.

EU regulators and civil society groups will also be present at the session, alongside the platforms themselves.

The involvement of “civil society groups”—often activist organizations with their own political agendas—in government censorship efforts highlights how the traditional boundaries between state and non-state actors are blurring in the digital age.

The Democratic Deficit

Perhaps most troubling is the complete absence of democratic oversight in this process. Unelected regulators and digital giants huddle to redraw the boundaries of acceptable speech weeks before the Netherlands votes. Neither voters nor their elected representatives have any meaningful input into what speech will be permitted during their own election.

This represents a fundamental inversion of democratic principles, where bureaucrats and corporate executives—not citizens—decide what information voters can access when making the most important political decisions in a democracy.

Conclusion: A New Form of Election Interference

The Dutch regulator’s pressure campaign against social media platforms represents a sophisticated form of election interference—one conducted not by foreign adversaries, but by domestic authorities wielding EU digital laws as weapons of political control.

By framing censorship as “election integrity” and “safety,” European regulators have found a way to restrict political speech that would be unthinkable through traditional legislative processes. The September 15 meeting in the Netherlands may well be remembered as a watershed moment when European democracy formally embraced the principle that unelected officials should control what citizens can say and hear during elections.

For those who believe in democratic self-governance, the implications are chilling. If regulators can effectively control electoral discourse through pressure on private platforms, the fundamental premise of democratic choice—that informed citizens should decide their own political future—becomes meaningless.

The Dutch model isn’t protecting democracy; it’s replacing it with a technocratic system where bureaucrats and big tech executives have more influence over electoral outcomes than voters themselves.