When the EU’s General Data Protection Regulation took full effect in May 2018, it was widely described — with only slight exaggeration — as the most significant privacy law ever enacted. It applied to any company processing the personal data of EU residents regardless of where that company was headquartered. It gave individuals the right to access, correct, and delete their data. It required meaningful consent before data collection. It created a supervisory architecture with real enforcement teeth: fines of up to 4% of global annual revenue.
The fines came. Meta was hit with €1.2 billion for transferring European user data to the US without adequate safeguards. Google received €50 million from French regulators in GDPR’s first year. Amazon, WhatsApp, LinkedIn, TikTok, and dozens of others followed. European regulators pursued cases that US enforcers wouldn’t touch. The law worked — imperfectly, slowly, with maddening inconsistencies across member states, but it worked in ways that mattered.
Now the European Commission is proposing to rewrite it. The vehicle is called the Digital Omnibus — officially Omnibus IV — published May 21, 2025. It is framed as an efficiency measure, a way to reduce compliance costs for small and medium businesses and align the tangle of EU digital laws that have accumulated since GDPR: the AI Act, the Data Act, the ePrivacy Directive. The framing is reasonable. The details are not.
What the Digital Omnibus Actually Changes
The personal data definition. GDPR’s power rests on a broad definition of personal data: any information relating to an identified or identifiable natural person. The Omnibus proposed a revised definition that would effectively allow companies to self-assess whether their data processing actually involves personal data, an approach critics described as letting companies “mark their own homework.” The practical consequence: companies could argue that data used for AI training has been sufficiently processed to no longer count as personal data, removing it from GDPR’s scope entirely.
This was the most contested change. In February 2026, the Council of the European Union — through a draft circulated by the Cypriot presidency — eliminated the proposed new definition of personal data from the package. That was a genuine victory for privacy advocates. The rest of the Omnibus continues.
The AI training exemption. The Omnibus requires companies to remove personal data from AI systems — but only when doing so does not require “disproportionate efforts.” The phrase is not defined. No criteria are established for what counts as disproportionate. No independent authority is tasked with making that determination. In practice, any company that wants to keep personal data in a trained model can argue that retraining without it would be expensive, technically complex, or operationally disruptive — all of which are true for large models and all of which describe “disproportionate effort” loosely enough to swallow the rule.
EDRi, the European Digital Rights network, called this change “a carveout that could undermine the core purposes of the GDPR — to protect people from the harm caused by the mass collection and analysis of their personal information.” The current drafting effectively creates a one-way valve: personal data flows into AI training under GDPR’s existing rules, and then the Omnibus makes it nearly impossible to get it out.
ePrivacy weakening. The Omnibus shifts the key device-access provision of the ePrivacy Directive into GDPR and introduces broad exceptions allowing businesses to access data on users’ devices without requesting consent. The ePrivacy Directive, despite its age and imperfect implementation, is the legal basis for requiring cookie consent banners — annoying as they are, those banners exist because the underlying rule says companies cannot read your device without permission. The Omnibus exceptions significantly narrow when that permission is required.
Automated decision-making. GDPR Article 22 restricts automated decision-making that produces legal or similarly significant effects on individuals — the provision that limits algorithmic hiring rejections, credit denials, and insurance decisions made without human review. The Omnibus loosens these restrictions in ways that, critics argue, will allow discriminatory automated systems to operate with fewer safeguards.
How Big Tech Shaped the Package
This didn’t happen by accident. Corporate Europe Observatory, a research organization that tracks corporate lobbying of EU institutions, published a detailed reconstruction of how the Digital Omnibus’s specific provisions map to requests made by industry groups and individual technology companies during the consultation process. The pattern it documents is not subtle: change by change, the Omnibus adopts language that industry lobbyists proposed and that privacy advocates opposed.
The AI training exemption, in particular, traces directly to lobbying pressure from companies with large AI training pipelines, which argued that strict GDPR application to training data would put European AI development at a competitive disadvantage relative to the US and China. That argument, the competitiveness frame, has become the standard mechanism for weakening digital rights rules in Brussels. When a law creates friction for technology companies, the response is not to accept the friction as the cost of compliance, but to reopen the law and remove the friction.
What makes the Digital Omnibus notable is that the same companies that spent years fighting GDPR enforcement — the same ones that paid the billion-euro fines — are now inside the process that rewrites the rules that generated those fines.
The Timing Is Not a Coincidence
The Digital Omnibus is moving through the EU legislative process simultaneously with the most significant AI Act compliance deadline to date. On August 2, 2026, the requirements for Annex III high-risk AI systems become enforceable. That covers AI used in employment decisions, credit assessments, educational access, and law enforcement — the categories where algorithmic discrimination has been most thoroughly documented.
A company building an AI hiring system today faces two converging regulatory pressures: GDPR’s restrictions on automated decision-making under Article 22, and the AI Act’s forthcoming requirements for transparency, human oversight, and risk management. The Omnibus’s loosening of Article 22 constraints arrives precisely as the AI Act deadline approaches, reducing the combined compliance burden at the moment when it would otherwise be most acute.
Whether this timing is coordinated or coincidental is a question for EU legislative historians. The practical effect is the same: the new AI compliance requirement lands in an environment where one of the existing guardrails has been weakened.
The Irony at the Center of All of This
The EU spent fifteen years building a privacy regulatory architecture that became the global standard. GDPR influenced Brazil’s LGPD, Japan’s amended APPI, South Korea’s PIPA, California’s CCPA, and dozens of other laws. When countries with no prior history of strong data protection wanted a template, they took GDPR. When US companies operating globally wanted a single compliance framework to standardize against, they took GDPR.
The companies that spent those fifteen years litigating GDPR enforcement, arguing jurisdictional questions, challenging adequacy decisions, and lobbying against strict DPA interpretations are now the primary architects of its replacement. The lobbying records are public. The outcome is visible in the text.
Amnesty International put it plainly in an April 2026 analysis: the Omnibus “will roll back our rights” and represents the EU abandoning the regulatory leadership position it spent a generation building. Tech Policy Press made the same point in headline form: “EU Set the Global Standard on Privacy and AI. Now It’s Pulling Back.”
Where the Fight Stands
The Council’s February 2026 rejection of the proposed personal data definition change is the most meaningful resistance to date. It demonstrates that member states are not uniformly accepting the Commission’s framing of “simplification” as a neutral or even accurate description of what the Omnibus does.
The package continues through the ordinary legislative procedure. The European Parliament, which has historically taken stronger privacy positions than the Council or Commission, will have the opportunity to amend it. Civil society organizations — EDRi, Amnesty, Access Now, Privacy International — are coordinating advocacy to restore deleted protections and close the AI training loophole.
The outcome is not determined. But the baseline has shifted. Whatever version of the Omnibus emerges will be weaker than GDPR as it currently exists. The question is how much weaker — and whether the protections that made GDPR worth having survive the process.
For anyone building compliance programs, advising clients on EU data protection, or simply trying to understand whether the rights GDPR promised still exist: watch the Parliament’s amendments closely. The next few months will determine whether the EU’s digital rights regime is reformed or substantially dismantled. Both outcomes are currently plausible.
Resources for EU Privacy and Compliance
- GDPR, AI Act, and global privacy law reference guides: ComplianceHub.wiki
- Privacy assessment and risk tools: MyPrivacy.blog
- Corporate data breach tracking: Breached.company
For organizations building EU AI Act compliance programs or navigating GDPR in the Omnibus transition period, CISO Marketplace provides regulatory gap assessments and vCISO advisory services.
Sources: European Commission Digital Omnibus IV proposal, May 21, 2025; EDRi analysis; Amnesty International, April 2026; Corporate Europe Observatory lobbying reconstruction; IAPP Digital Omnibus analysis; Tech Policy Press; Kennedys Law; EU Council leaked compromise draft (Cypriot presidency), February 2026; EU AI Act compliance timeline via Legal Nodes.