A collision between constitutional rights, child safety, and rapidly advancing AI technology
The Crisis at Our Digital Doorstep
On September 30, 2025, OpenAI released Sora 2, a groundbreaking AI model capable of generating photorealistic video with synchronized audio. Within hours, the technology was being tested for its ability to create convincing deepfakes of public figures. The same day, legal experts on social media were pointing to a Wisconsin federal court case that had quietly dropped a bombshell months earlier: the Constitution may protect the private possession of AI-generated child sexual abuse material (CSAM).
These two developments, one technological and one legal, represent a convergence that privacy advocates, child safety experts, and lawmakers have feared but struggled to address. The question is no longer hypothetical: What happens when AI can generate images and videos of child abuse that are indistinguishable from reality, and the law says you can't prosecute people for keeping them private?
As one legal commentator put it bluntly on social media: "We are not ready for this."
The Wisconsin Case: When Technology Outpaces Law
The Facts
In May 2024, federal prosecutors in Wisconsin charged Steven Anderegg with using Stable Diffusion, an AI image generation tool, to create obscene images of minors and distribute them to a teenager on Instagram. Critically, prosecutors made no allegation that the images depicted real children; they were entirely AI-generated.
Anderegg faced four charges under federal law, including three counts under 18 U.S.C. § 1466A, the federal child obscenity statute, for production, distribution, and possession of AI-generated CSAM. He also faced one count of transferring obscene material to a minor.
The Ruling
In February 2025, the U.S. District Court for the Western District of Wisconsin dismissed the possession charge, holding that the First Amendment protects the private possession of obscene AI-generated CSAM. The court allowed the other three charges to proceed to trial, but the ruling on possession sent shockwaves through legal and child safety communities.
The court's reasoning rested on three Supreme Court precedents:
Stanley v. Georgia (1969): The Supreme Court held that the First Amendment protects the right to possess obscene material in one's own home, stating that the government cannot control a person's private thoughts.
Osborne v. Ohio (1990): The Court carved out an exception to Stanley for actual CSAM involving real children, because the government has a compelling interest in protecting children from exploitation during production.
Ashcroft v. Free Speech Coalition (2002): The Court struck down provisions of the Child Pornography Prevention Act that banned "virtual" child pornography: images created without using real children. The Court held that without actual child victims in the production process, such images constitute protected speech under the First Amendment.
The Wisconsin court concluded that Anderegg's case fell squarely within the Stanley-Ashcroft framework: AI-generated images with no real children depicted + private possession = constitutionally protected.
The Government's Failed Arguments
Federal prosecutors tried multiple approaches to distinguish this case, all of which the court rejected:
"This is more like Osborne than Stanley": The court disagreed, noting that without real children involved, the compelling government interest recognized in Osborne doesn't apply.
"Stanley should be limited to adult content": The court found this inconsistent with Ashcroft, which already established that virtual CSAM is protected speech when no real children are involved.
"Virtual CSAM can be used to groom children": The court noted the government made this exact argument in Ashcroft and lost. The Supreme Court held that the mere tendency of speech to encourage unlawful acts isn't sufficient reason for banning it.
"It's too difficult to distinguish virtual from real CSAM": Again, this argument failed in Ashcroft. The Court held that the government cannot ban protected speech merely because it resembles unprotected speech.
"The interstate commerce hook distinguishes this from Stanley": The court rejected this, warning that accepting such logic would render Stanley "a dead letter," as virtually all digital content involves interstate commerce in some way.
The Constitutional Collision
Why Ashcroft Matters Now
When the Supreme Court decided Ashcroft v. Free Speech Coalition in 2002, Justice Clarence Thomas wrote a concurring opinion that now reads like prophecy:
"If technological advances thwart prosecution of 'unlawful speech,' the Government may well have a compelling interest in barring or otherwise regulating some narrow category of 'lawful speech' in order to enforce effectively laws against pornography made through the abuse of real children."
In 2002, "virtual" child pornography meant crude computer graphics or adults made to look like children through prosthetics or special effects. The technology was distinguishable from reality. Twenty-three years later, AI models like Stable Diffusion, Midjourney, and DALL-E can generate photorealistic images that even trained forensic experts struggle to authenticate.
Now, Justice Thomas's concurrence looks less like a hypothetical and more like an invitation to relitigate Ashcroft in the age of AI.
The Historic and Traditional Exception
Legal commentators have pointed to language in the Supreme Court's First Amendment jurisprudence about "historic and traditional categories" of unprotected speech. As the Court stated in U.S. v. Stevens (2010):
"From 1791 to the present, certain historic and traditional categories, such as obscenity, defamation, fraud, incitement, and speech integral to criminal conduct, have been understood to fall outside the scope of the First Amendment."
Here's the problem: CSAM is not actually part of those "historic and traditional categories." The Supreme Court didn't recognize CSAM as categorically unprotected until 1982 in New York v. Ferber, nearly 200 years after the First Amendment was ratified. And even then, the Court's reasoning was entirely focused on protecting real children from the harms of production.
As one legal scholar noted on social media: "Ferber obtains because it would be an abomination if it didn't." In other words, the Court made child pornography an exception to First Amendment protection not because of historical tradition, but because the alternative was morally unthinkable.
With AI-generated CSAM, there are no real children harmed in production. The traditional Ferber rationale collapses.
The Broader Implications: Revenge Porn and Deepfakes
Some legal observers have pointed out that if the reasoning in Ashcroft protects AI-generated CSAM, it may also protect other forms of harmful AI-generated content. This connects to broader concerns about AI's role in facilitating human trafficking and exploitation, where synthetic media is weaponized for harm:
AI-generated revenge porn: If someone creates sexually explicit deepfakes of an adult without consent, is that protected speech under Ashcroft? In the technical sense, no "real" conduct is depicted; the sexual activity never occurred.
Defamatory deepfakes: If someone creates a convincing video of a public figure saying or doing something they never did, is that protected speech? Many state deepfake laws may face constitutional challenges.
Political disinformation: AI-generated videos of candidates making false statements or engaging in misconduct could proliferate without effective legal recourse.
The line between protected speech and harmful conduct becomes dangerously blurred when AI can create synthetic realities that are indistinguishable from truth.
The State Response: A Patchwork of Laws
Current Legal Landscape
As of August 2025, 45 states have enacted laws criminalizing AI-generated or computer-edited CSAM. Only five states and Washington, D.C. have not. Most of these laws were passed in 2024-2025, reflecting the urgency lawmakers feel about this issue. This legislative wave is part of broader efforts to strengthen child safety regulations in digital spaces.
However, many of these state laws face potential constitutional challenges under the Ashcroft precedent. Some states have drafted narrow statutes that focus on:
- Images that depict identifiable real minors (even if the sexual content is AI-generated)
- Images used with intent to harass, intimidate, or harm specific victims
- Distribution rather than possession
Wisconsin became the first state to specifically prohibit possession and creation of purely virtual, AI-generated CSAM. Yet it was a Wisconsin federal court that ruled such a prohibition unconstitutional as applied to private possession.
The Federal Approach
At the federal level, prosecutors have increasingly turned to 18 U.S.C. § 1466A, the child obscenity statute, which explicitly does not require "that the minor depicted actually exist." This allows prosecutors to avoid the difficult task of proving whether a photorealistic AI image depicts a real child.
However, as the Wisconsin case demonstrates, Section 1466A runs into the Stanley v. Georgia problem: the First Amendment protects private possession of obscenity (including obscene depictions of minors) as long as no real children are involved.
The statute still allows prosecution for:
- Production of AI-generated child obscenity
- Distribution of AI-generated child obscenity
- Transferring such material to minors
But the possession charge, often the easiest to prove and prosecute, may be off the table for purely AI-generated material.
The Scale of the Problem
The numbers tell a disturbing story about how rapidly this issue is escalating:
2024: The National Center for Missing & Exploited Children (NCMEC) received 67,000 reports of AI-generated CSAM to its CyberTipline.
First half of 2025: NCMEC received 485,000 reports, a 624% increase over the total for all of 2024.
Projection: If the trend continues, 2025 could see over one million reports of AI-generated CSAM.
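For readers checking the arithmetic, the 624% figure follows directly from the two report counts above, measuring the first half of 2025 against all of 2024:

```latex
\frac{485{,}000 - 67{,}000}{67{,}000} \approx 6.24
\quad\Longrightarrow\quad \text{a } 624\% \text{ increase}
```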
This represents a new dimension to an already serious problem. Traditional platforms like Snapchat have struggled with CSAM issues for years, but AI generation creates an entirely new category of synthetic abuse material.
Yet despite this flood of reports, only one federal criminal case alleging creation of virtual child pornography using AI is currently pending (the Anderegg case discussed above). This suggests that the legal framework is not keeping pace with the technological reality.
The Grooming Concern
Even if AI-generated CSAM doesn't directly harm children in production, child safety advocates point to several indirect harms. As we've explored in our guide on protecting minors online, the digital landscape presents evolving threats to children's safety:
Grooming tool: Predators can use AI-generated images to desensitize children to sexual content or normalize abuse.
Market normalization: Widespread availability of AI-generated CSAM may reduce the stigma around actual CSAM and create a pipeline to content involving real victims.
Detection interference: The flood of AI-generated material makes it harder for law enforcement to identify and prioritize cases involving real children.
Morphed images: AI tools can take innocent photos of real children and transform them into sexually explicit content, causing psychological harm to identifiable victims even though the sexual conduct is synthetic.
However, the Supreme Court rejected similar arguments in Ashcroft, holding that the government cannot suppress protected speech based on how others might misuse it.
The Technology Is Already Here
Sora 2 and the Deepfake Future
OpenAI's September 30, 2025, release of Sora 2 demonstrates just how quickly AI video generation is advancing. The model can now:
- Generate photorealistic video up to 20 seconds long
- Create synchronized dialogue and sound effects
- Follow the laws of physics with dramatically improved realism
- Feature "cameos" where users can insert themselves or others into AI-generated scenes
Sora 2's "cameo" feature is particularly concerning from a child safety perspective. While OpenAI requires consent and identity verification, the technology can be replicated by bad actors without such safeguards. Open-source models such as Stable Diffusion, along with other freely downloadable alternatives, are widely available with minimal or no safety restrictions.
Within hours of Sora 2's release, users were creating deepfakes of public figures and testing the limits of the platform's content moderation. Early reports showed the system could generate realistic videos of public figures in compromising situations, despite OpenAI's stated prohibitions.
As one cybersecurity expert noted: "The tools to create synthetic CSAM are now so accessible that anyone with a laptop and an internet connection can do it."
The Attribution Problem
One additional challenge: How do we prove an AI-generated image depicts a real child?
With traditional CSAM, law enforcement can sometimes identify victims and build cases around rescuing them from ongoing abuse. With AI-generated content, there may be no victim to identify. Prosecutors must prove beyond a reasonable doubt that:
1. The image is of a real person (not purely synthetic)
2. That person is identifiable
3. That person was a minor at the time of the depiction
As AI image quality continues to improve, this burden becomes nearly impossible to meet. The technology is advancing faster than forensic capabilities to analyze it.
What Can Be Done?
Legal Reforms
Several approaches are being debated, building on existing frameworks like COPPA (Children's Online Privacy Protection Act) and proposed legislation such as KOSA (Kids Online Safety Act). Specific approaches to AI-generated CSAM include:
1. Narrow statutory definitions: Focus laws on AI-generated content that depicts identifiable real minors, even if the sexual conduct is synthetic. This avoids the Ashcroft problem by tying the prohibition to a real victim.
2. Distribution-focused statutes: Since Stanley protects only private possession, focus enforcement on production, distribution, and sharing rather than mere possession.
3. Supreme Court review: The government has appealed the Wisconsin district court decision to the Seventh Circuit Court of Appeals. This will be the first federal appellate case involving AI-generated CSAM and First Amendment issues. Ultimately, the Supreme Court may need to revisit Ashcroft in light of modern AI capabilities.
4. Grooming enhancements: Strengthen laws against using AI-generated material in the course of grooming or soliciting minors, even if mere possession remains protected.
5. International coordination: Since AI-generated CSAM crosses borders effortlessly, domestic laws alone are insufficient. International treaties and coordinated enforcement will be necessary. Multiple child safety bills beyond COPPA and KOSA are being considered at federal and state levels to address these emerging threats.
Platform Responsibility
Tech platforms and AI companies must implement robust safeguards. Major platforms have taken varying approaches to CSAM detection; Apple's controversial on-device scanning proposal illustrates the tensions between privacy and child safety. For AI-generated content specifically, companies must implement:
Content moderation at scale: Platforms need AI-powered detection systems that can identify and remove AI-generated CSAM as quickly as it's created.
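To make the detection challenge concrete, here is a deliberately simplified sketch of hash-based matching in Python. The "average hash" below is a toy stand-in for the robust perceptual hashes (such as PhotoDNA or PDQ) that real platforms pair with ML classifiers, and the blocklist is hypothetical:

```python
# Simplified illustration of hash-based matching against a curated
# blocklist of known-image hashes. Teaching example only; production
# systems use robust perceptual hashes plus ML classifiers.
from PIL import Image  # pip install Pillow

def average_hash(path: str, size: int = 8) -> int:
    """Downscale to an 8x8 grayscale grid; set one bit per pixel above the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count the bits on which two hashes differ."""
    return bin(a ^ b).count("1")

def flag_upload(path: str, blocklist: set[int], threshold: int = 5) -> bool:
    """Flag a file whose hash is within `threshold` bits of any known hash."""
    h = average_hash(path)
    return any(hamming_distance(h, known) <= threshold for known in blocklist)
```

The limitation is the point: hash matching only catches re-circulated copies of already-known images, which is exactly why a flood of novel AI-generated material strains existing detection pipelines.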
Model restrictions: AI model developers should implement safeguards that prevent their systems from generating child sexual abuse content, even if such restrictions can be circumvented by determined bad actors.
Reporting mechanisms: Platforms should have clear channels for reporting suspected AI-generated CSAM to NCMEC and law enforcement. Some platforms like TikTok have implemented differentiated experiences for minors to enhance safety, and similar age-appropriate safeguards are needed for AI generation tools.
Provenance and watermarking: AI-generated content should include embedded metadata and visible watermarks to aid in detection and attribution.
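As an illustration of what embedding provenance can look like, here is a minimal sketch using Pillow, assuming PNG output. The metadata keys are hypothetical, and real deployments rely on cryptographically signed standards such as C2PA Content Credentials precisely because plain labels and text chunks are trivially stripped:

```python
# Minimal provenance-labeling sketch (PNG + Pillow). Illustrative only:
# unsigned metadata and visible labels are easy for bad actors to remove.
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def label_generated_image(in_path: str, out_path: str, model_name: str) -> None:
    img = Image.open(in_path).convert("RGB")

    # Visible watermark: a disclosure label drawn in the lower-left corner.
    draw = ImageDraw.Draw(img)
    draw.text((10, img.height - 20), f"AI-generated ({model_name})",
              fill=(255, 255, 255))

    # Embedded provenance: PNG text chunks naming the generator.
    meta = PngInfo()
    meta.add_text("ai_generated", "true")   # hypothetical key
    meta.add_text("generator", model_name)  # hypothetical key
    img.save(out_path, "PNG", pnginfo=meta)
```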
The Privacy Tension
Privacy advocates face a difficult tension: while protecting privacy and free expression is paramount, these same protections can shield deeply harmful conduct.
Encryption: End-to-end encryption protects user privacy but makes it nearly impossible for platforms to detect AI-generated CSAM in private messages.
Anonymity: Anonymous platforms protect whistleblowers and dissidents but also provide cover for predators.
Local AI models: Open-source AI models that run entirely on a userās device cannot be monitored or restricted by platforms or governments.
There are no easy answers. Any solution that makes it easier to detect AI-generated CSAM will also expand surveillance capabilities that could be abused. The challenge is finding a balance that protects children without creating a surveillance state.
The Hard Questions Ahead
This issue forces us to confront uncomfortable questions about the limits of constitutional protections in an age of synthetic media:
If the First Amendment protects private possession of AI-generated CSAM that depicts no real child, should it? The Constitution is not a suicide pact, but changing it requires more than moral urgency.
If we allow the government to ban AI-generated CSAM despite Ashcroft, what other synthetic content can the government ban? Political deepfakes? Defamatory images? Unflattering portraits? Where is the line?
Should Stanley v. Georgia still apply in the digital age? Stanley was decided in 1969, when possessing obscene material meant buying physical magazines and keeping them in your home. In 2025, "possession" can mean cloud storage, temporary cache files, or AI models that can regenerate content on demand. Is the rationale for Stanley still valid?
Can we distinguish between creation and possession when AI generates images in seconds? Traditional law treats production and possession as separate acts. With AI, they merge into a single interaction with a model. How should the law adapt?
What duty do AI companies have to prevent misuse of their models? OpenAI, Stability AI, and others have released powerful tools into the world. To what extent are they responsible for what users create with them?
Conclusion: A Call to Action
The Wisconsin court ruling was correct as a matter of current constitutional law. The judge faithfully applied Supreme Court precedent to the facts of the case. But that doesnāt mean the outcome is just, or that the law should remain unchanged.
Twenty-three years ago, the Supreme Court could not have imagined AI systems that can generate photorealistic images and videos of child abuse on demand. The technology has fundamentally changed the calculus. We need new legal frameworks that can protect children without abandoning constitutional principles.
This will require:
- Legislative creativity to draft statutes that are both effective and constitutional
- Judicial flexibility to recognize when old precedents no longer fit new realities
- Technological innovation to build detection and prevention tools
- International cooperation to address a problem that knows no borders
- Civil society engagement to keep pressure on all actors to prioritize child safety
As we've emphasized in our comprehensive guide on protecting children online, safeguarding minors in the digital age requires a multi-stakeholder approach that balances innovation, privacy, and safety.
As Justice Thomas warned in 2002: if technology thwarts prosecution of unlawful speech, the government may have a compelling interest in regulating some narrow category of lawful speech. That time may have arrived.
The question is whether we have the wisdom, the courage, and the political will to update our laws and norms for an age where the unreal can be indistinguishable from reality, and where the harm, though mediated by algorithms rather than cameras, is no less real to the children whose likenesses are exploited, and to the society that must grapple with the consequences.
We are not ready for this. But ready or not, it's here.
Additional Resources
- Riana Pfefferkorn's analysis: Court Rules That Constitution Protects Private Possession of AI-Generated CSAM (TechPolicy.Press, March 20, 2025)
- NCMEC CyberTipline: Report suspected child exploitation at CyberTipline.org
- State law tracker: Enough Abuse state-by-state analysis
- **Supreme Court cases:**
  - Stanley v. Georgia, 394 U.S. 557 (1969)
  - New York v. Ferber, 458 U.S. 747 (1982)
  - Osborne v. Ohio, 495 U.S. 103 (1990)
  - Ashcroft v. Free Speech Coalition, 535 U.S. 234 (2002)
This article is for informational purposes only and does not constitute legal advice. Laws regarding AI-generated CSAM vary by jurisdiction and are rapidly evolving. If you become aware of suspected child exploitation, report it immediately to NCMEC's CyberTipline and local law enforcement.