The AI revolution has a dark underbelly. Deepfake "nudify" technology is now sophisticated enough to generate explicit videos from a single photo, and the infrastructure supporting this abuse has evolved into a multi-million-dollar industry targeting women and girls.
BREAKING: European Commission Opens Investigation Into X Over Grok Deepfakes
January 26, 2026 – The European Commission has launched a formal investigation into X under the Digital Services Act (DSA) following a global backlash over Grok's AI image generation capabilities enabling the creation of nonconsensual sexualized deepfakes.
The investigation comes after Elon Musk's AI chatbot Grok allowed users to "undress" people in photos, placing women and girls in transparent bikinis or revealing clothing. Researchers reported that some generated images appeared to include children.
"Non-consensual sexual deepfakes of women and children are a violent, unacceptable form of degradation. With this investigation, we will determine whether X has met its legal obligations under the DSA, or whether it treated rights of European citizens, including those of women and children, as collateral damage of its service."
– Henna Virkkunen, Executive Vice-President for Tech Sovereignty, Security and Democracy
The Commission is examining whether X properly assessed and mitigated risks associated with Grok's deployment, including risks related to disseminating illegal content such as manipulated sexually explicit images and potential child sexual abuse material. For a deeper dive into how the DSA works and its enforcement mechanisms, see our comprehensive guide: The EU's Digital Services Act: A New Era of Online Regulation.
Key enforcement actions and responses:
- The EU fined X €120 million ($140 million) in December under a separate DSA investigation
- The Commission has extended its ongoing investigation into X's recommender systems, including its transition to Grok-based recommendations
- Malaysia and Indonesia temporarily blocked Grok earlier this month (Malaysia has since lifted restrictions after xAI implemented additional safeguards)
- The UK's Ofcom has opened a parallel investigation under the Online Safety Act
- xAI announced on January 14 it would stop allowing Grok to undress images of real people and implement geoblocking in jurisdictions where such content is illegal
If X is found in breach of the DSA, the platform could face fines of up to 6% of its total global annual revenue. Grok itself has acknowledged "lapses in safeguards" against producing such imagery.
The Evolving Threat Landscape
What started in 2017 as a fringe technology requiring significant technical expertise has transformed into an industrialized abuse ecosystem. Today's deepfake services don't just offer crude image manipulation; they provide menus of explicit scenarios, video templates depicting graphic sexual situations, and AI-generated audio, all produced from a single uploaded photograph.
The barrier to entry has collapsed entirely. A review of more than 50 deepfake websites reveals that nearly all now offer high-quality video generation with dozens of customizable sexual scenarios. Services advertise features like "sex-mode," custom poses, age manipulation, and even pregnancy simulation. One service offers 65 explicit video "templates" for fees measured in dollars, not thousands.
This isn't theoretical harm. Deepfake files have surged from approximately 500,000 in 2023 to a projected 8 million in 2025, a sixteen-fold increase in just two years.
The Numbers Paint a Disturbing Picture
The statistics reveal both the scale of the problem and its gendered nature:
Victimization Patterns:
- 96-98% of all deepfake content online consists of non-consensual intimate imagery (NCII)
- 99-100% of victims in deepfake pornography are female
- 1 in 3 respondents in a survey across Australia, New Zealand, and the UK reported being victims of some form of image-based sexual abuse
- 2.2% of people surveyed across 10 countries reported being victims of deepfake pornography specifically
Acceleration of Harm:
- Deepfake incidents increased 257% to 150 cases in 2024
- Q1 2025 alone saw 179 incidents, surpassing all of 2024 by 19%
- North America experienced a 1,740% increase in deepfake fraud
- The Internet Watch Foundation documented a 400% increase in AI-generated deepfakes of child sexual abuse in the first half of 2025 compared to 2024
The School Crisis: The Center for Democracy and Technology reports that NCII, both authentic and deepfake, has become a significant issue in K-12 schools. Female and LGBTQ+ students are disproportionately depicted in deepfake content, yet only 5% of schools provide victims with resources to help remove images from platforms. Instead, schools have focused primarily on punishing perpetrators while leaving victims without support. For organizations navigating the complex landscape of children's online protections, see our guides on KOSA: Protecting Our Children in the Digital Age and Beyond COPPA: The Surprising Legal Maze of U.S. Children's Data Privacy.
The Infrastructure of Abuse
The deepfake ecosystem has matured into sophisticated infrastructure. Larger services now offer APIs to smaller operators, enabling a mushrooming of nonconsensual image generators built on shared technology. A review found more than 1.4 million accounts signed up to just 39 deepfake creation bots and channels on Telegram, and that's before counting the dozens of dedicated websites.
The technology stack enabling this abuse relies heavily on open-source models. As one MIT researcher noted, these services often consist of an open-source model adapted into an app that users access directly. The democratization of AI has inadvertently democratized abuse.
Voice cloning has added another dimension to the threat. Attackers now need as little as three seconds of audio to create a clone with an 85% voice match. This source material can be scraped from social media posts, podcasts, or YouTube videos. Combined with video deepfakes, this enables highly convincing impersonation attacks.
Beyond Sexual Abuse: The Business Threat
Organizations face their own deepfake crisis. In February 2024, a finance worker at engineering firm Arup was deceived into wiring $25 million after fraudsters used deepfake video conferencing to impersonate the company's CFO and multiple colleagues. Every person on that call was AI-generated.
The financial impact continues to escalate:
- Average deepfake-related incident cost to businesses in 2024: nearly $500,000
- Large enterprises experienced losses up to $680,000 per incident
- CEO fraud now targets at least 400 companies per day using deepfakes
- Fraud losses from generative AI are expected to rise from $12.3 billion in 2024 to $40 billion by 2027
Despite these numbers, roughly 25% of company leaders have little or no familiarity with deepfake technology, and 80% of companies lack protocols to handle deepfake attacks. More than half of business leaders report their employees have received no training on identifying deepfake fraud attempts.
The Psychology of Perpetrators
Research interviewing deepfake creators reveals disturbing motivations. An Australian study identified four primary drivers: sextortion, intentional harm to others, peer bonding and reinforcement, and simple curiosity about the technologyâs capabilities.
Many communities developing these tools exhibit what researchers describe as a "cavalier" attitude toward the harm caused. One perpetrator told researchers: "You just want to see what's possible. Then you have a little godlike buzz of seeing that you're capable of creating something like that."
This normalization of abuse compounds the harm. Unlike public deepfakes that can spread virally, much of this content is shared privately with victims or their friends and family, maximizing psychological damage while minimizing detection.
Legal Responses: The TAKE IT DOWN Act and Beyond
The legal landscape shifted significantly in May 2025 when President Trump signed the TAKE IT DOWN Act into law, the first U.S. federal law directly targeting deepfake abuse. The legislation:
- Criminalizes knowingly publishing nonconsensual intimate imagery, including AI-generated content
- Requires platforms to remove flagged content within 48 hours
- Establishes penalties including fines and up to two years imprisonment (three years for content involving minors)
- Mandates that platforms implement notice-and-takedown systems by May 2026
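The 48-hour removal window is straightforward to track programmatically. The sketch below is purely illustrative: the function and field names are hypothetical and do not come from the statute or any platform API.

```python
from datetime import datetime, timedelta, timezone

# The TAKE IT DOWN Act requires removal of flagged content within 48 hours.
TAKEDOWN_WINDOW = timedelta(hours=48)

def removal_deadline(flagged_at: datetime) -> datetime:
    """Latest time by which a platform must remove the flagged content."""
    return flagged_at + TAKEDOWN_WINDOW

def is_overdue(flagged_at: datetime, now: datetime) -> bool:
    """True if the 48-hour removal window has already elapsed."""
    return now > removal_deadline(flagged_at)

flag = datetime(2026, 1, 1, 12, 0, tzinfo=timezone.utc)
print(removal_deadline(flag).isoformat())  # 2026-01-03T12:00:00+00:00
print(is_overdue(flag, datetime(2026, 1, 2, tzinfo=timezone.utc)))  # False
```

A compliance queue built on this kind of deadline check would sort flagged items by `removal_deadline` and alert moderators well before the window closes.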
The bipartisan bill passed the House 409-2 after unanimous Senate passage, reflecting rare cross-aisle consensus on the need for action. First Lady Melania Trump advocated for the legislation alongside teenage victims who had been targeted by deepfakes.
State-level responses vary significantly:
- California enacted both civil and criminal penalties for deepfake NCII
- New Jersey's law, inspired by student victim Francesca Mani, treats creation or sharing of deepfakes as a third-degree crime with up to $30,000 in fines
- Tennessee's ELVIS Act explicitly protects an individual's voice as personal property
- 30 states now have laws directly addressing deepfake NCII
For additional context on the evolving landscape of online child protection legislation, including COPPA, KOSA, and the STOP CSAM Act, see: Child Safety Bills: Beyond COPPA and KOSA.
Internationally, the EU AI Act mandates transparency for AI-generated content and has outlawed worst-case AI-based identity manipulation. The UK Online Safety Act designates deepfake pornography as a priority offense with enforcement beginning in 2025. The Digital Services Act (DSA) now provides the EU with powerful enforcement tools, as demonstrated by the January 2026 investigation into X/Grok, which could result in fines of up to 6% of global annual revenue for platforms that fail to assess and mitigate risks from AI-generated sexual imagery. For compliance teams navigating both frameworks, see: UK Online Safety Act and EU Digital Services Act Cross-Border Impact Analysis.
However, experts caution that legislation alone is insufficient. Without detection technology to identify synthetic content, laws lack teeth. As one researcher put it: "How would speed limits help if there were no radar guns or police to enforce them?"
The Detection Arms Race
Detection technology faces significant challenges keeping pace with generation capabilities. While multiple commercial tools now offer deepfake detection (including Sensity, Reality Defender, Hive AI, and Intel's FakeCatcher), recent studies suggest detection models struggle with newer or adversarial deepfakes.
Detection Capabilities:
- Some tools claim 98% accuracy on public datasets
- Audio deepfake detectors achieve approximately 88.9% accuracy in controlled settings
- Multi-layered systems analyze facial inconsistencies, biometric patterns, metadata, and behavioral anomalies
Detection Limitations:
- Performance degrades significantly against adversarial inputs
- A 2025 iProov study found only 0.1% of participants correctly identified all fake and real media shown
- 68% of deepfakes are now "nearly indistinguishable from genuine media"
- Standard audio deepfake detectors lost up to 43% of their performance when exposed to realistic inputs
YouTube has expanded its "likeness detection" tool to help creators identify unauthorized use of their face in deepfake videos. However, the tool requires uploading government ID and biometric video data, raising privacy concerns about whether that sensitive data could be used for AI training.
The fundamental problem: deepfake creation technology advances faster than detection capabilities, creating what researchers call a "vulnerability gap" that criminals exploit.
Protecting Yourself and Your Organization
For a comprehensive guide to personal privacy protection in the AI era, including detailed defensive strategies, see our companion article: Navigating the Deepfake Dilemma: Protecting Your Privacy in the AI Era.
For Individuals:
1. Audit your digital footprint - Minimize publicly available photos and video, particularly high-resolution images
2. Monitor for misuse - Services like Google reverse image search can help identify if your images are being misused
3. Know your rights - The TAKE IT DOWN Act gives you federal recourse; know your state's laws as well
4. Report immediately - Platforms must now remove flagged content within 48 hours under federal law
5. Document everything - Preserve evidence before attempting removal
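As a small aid to the monitoring step, the sketch below (all names hypothetical) fingerprints the photos you have published so you can later check whether a file found elsewhere is an exact copy. Note the limitation: cryptographic hashes only match byte-identical files, so any re-encode or edit defeats them; reverse image search or perceptual hashing remains necessary for modified copies.

```python
import hashlib
from pathlib import Path

def fingerprint(path: Path) -> str:
    """SHA-256 digest of a file; matches only byte-identical copies."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_index(photo_dir: Path) -> dict[str, str]:
    """Map digest -> filename for every published .jpg in a directory."""
    return {fingerprint(p): p.name for p in photo_dir.glob("*.jpg")}

def is_known(candidate: Path, index: dict[str, str]) -> bool:
    """Check whether a downloaded file is an exact copy of one of yours."""
    return fingerprint(candidate) in index
```

In practice you would run `build_index` once over your published photos, store the digests, and compare any suspicious file found online against them.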
For Organizations:
1. Implement multi-factor authentication - Voice or video alone should never authorize high-value transactions
2. Establish verification protocols - Require callback verification through known numbers for sensitive requests
3. Train employees - More than half of companies haven't trained staff on deepfake recognition
4. Adopt detection tools - Consider commercial deepfake detection for identity verification workflows
5. Create incident response plans - 80% of companies lack protocols for deepfake attacks
6. Verify unexpected requests - Especially those involving urgency, secrecy, or financial transactions
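The first two items can be expressed as a simple policy gate: a live voice or video call never authorizes a high-value transfer by itself; an out-of-band callback to a number from a maintained directory must confirm it. This is a hedged sketch with hypothetical names and thresholds, not a production control.

```python
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    requester: str          # role of the person apparently making the request
    amount_usd: float
    callback_confirmed: bool  # verified via a known directory number, never the inbound call

HIGH_VALUE_THRESHOLD = 10_000.0  # illustrative cutoff
KNOWN_NUMBERS = {"cfo": "+1-555-0100"}  # maintained directory, not caller-supplied

def authorize(req: PaymentRequest) -> bool:
    """Reject high-value transfers lacking independent callback verification."""
    if req.amount_usd < HIGH_VALUE_THRESHOLD:
        return True
    return req.requester in KNOWN_NUMBERS and req.callback_confirmed

# A $25M request "from the CFO" on a video call, with no callback, is refused.
print(authorize(PaymentRequest("cfo", 25_000_000, callback_confirmed=False)))  # False
```

Had a rule like this been enforced, the Arup-style attack would have failed at the callback step regardless of how convincing the synthetic video was.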
For Security Professionals:
1. Integrate deepfake awareness into security training - Include examples in phishing simulations
2. Review identity verification processes - Assess vulnerability to face-swap and voice clone attacks
3. Monitor dark web discussions - Analysts report increased chatter about using deepfakes to bypass identity verification
4. Consider liveness detection - Technology that identifies markers indicating whether content is generated by a living human or AI
5. Layer security controls - No single control defeats sophisticated deepfake attacks
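The layering advice in the last item can be sketched as a risk-scoring combiner: no single signal decides, but several weak checks together trigger escalation. The signal names, weights, and threshold below are illustrative assumptions.

```python
# Hypothetical weights for individual detection signals; in a real system
# these would be tuned against observed attack and false-positive rates.
SIGNAL_WEIGHTS = {
    "liveness_check_failed": 0.5,
    "voice_mismatch": 0.3,
    "unusual_request_pattern": 0.2,
}

def risk_score(signals: set[str]) -> float:
    """Sum the weights of all triggered signals (0.0 = clean)."""
    return sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in signals)

def escalate(signals: set[str], threshold: float = 0.5) -> bool:
    """Escalate to manual review when combined risk crosses the threshold."""
    return risk_score(signals) >= threshold

# Two weak signals together cross the threshold; either alone would not.
print(escalate({"voice_mismatch", "unusual_request_pattern"}))  # True
```

The design point is the layering itself: an attacker who defeats the liveness check still has to avoid every other signal to stay under the threshold.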
The Path Forward
The deepfake threat exists at the intersection of multiple societal challenges: the democratization of powerful AI, persistent gender-based violence, inadequate platform accountability, and the difficulty of regulating rapidly evolving technology.
New laws like the TAKE IT DOWN Act represent meaningful progress, but enforcement will prove challenging. Detection technology must improve rapidly to provide the forensic capabilities law enforcement needs. Platforms must invest genuinely in content moderation rather than relying on minimal-effort compliance.
Most fundamentally, society must confront the normalization of digital sexual abuse. The communities creating these tools often demonstrate shocking indifference to the harm they enable. Until that cultural attitude shifts, whether through legal consequences, platform deplatforming, or social pressure, the technology will continue to find willing operators.
For organizations, the calculus is clearer: deepfake-enabled fraud represents a material risk requiring the same attention as any other significant threat vector. The $25 million Arup loss demonstrates that sophisticated attackers can defeat traditional verification through convincing synthetic media.
For individuals, particularly women and girls, the threat is more personal and more difficult to mitigate. The existence of these tools means anyone with sufficient public presence faces potential victimization. And the best defense, minimizing one's digital footprint, conflicts directly with modern professional and social life.
The AI revolution has delivered extraordinary capabilities. It has also industrialized abuse at unprecedented scale. Confronting that reality requires legal frameworks, technological countermeasures, and cultural change working in concert. The alternative is a digital environment where anyone's likeness can be weaponized, and trust in visual media erodes entirely.
Have questions about deepfake threats to your organization? Concerned about protecting yourself or your employees from synthetic media attacks? Contact us for a security assessment or training consultation.
References:
- European Commission press release: Investigation into X and Grok under the DSA (January 26, 2026)
- WIRED: "Deepfake 'Nudify' Technology Is Getting Darker and More Dangerous"
- U.S. TAKE IT DOWN Act (signed May 19, 2025)
- Center for Democracy and Technology: "In Deep Trouble: Surfacing Tech-Powered Sexual Harassment in K-12 Schools"
- Internet Watch Foundation AI-CSAM statistics, 2025
- Sensity/Deeptrace research
- McAfee AI voice scam study, 2024
- Deloitte Center for Financial Services 2024 risk forecast