Bottom Line Up Front: Millions of people are turning to AI chatbots for therapy and emotional support, but these conversations lack the legal protections that human therapy provides. When you open up to ChatGPT about your deepest struggles, that conversation can be subpoenaed, stored indefinitely, and used against you in court. This represents a fundamental moral failure of design that demands immediate action.
The Uncomfortable Truth Sam Altman Just Revealed
OpenAI CEO Sam Altman recently made a startling admission on Theo Von's podcast that should give every ChatGPT user pause: "People talk about the most personal sh** in their lives to ChatGPT. People use it (young people, especially, use it) as a therapist, a life coach; having these relationship problems and [asking] 'what should I do?' And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it. There's doctor-patient confidentiality, there's legal confidentiality, whatever. And we haven't figured that out yet for when you talk to ChatGPT."
The implications are staggering. While your conversations with a licensed therapist enjoy robust legal protections, OpenAI could be legally required to produce your ChatGPT conversations today if subpoenaed. Every intimate detail you've shared, every vulnerable moment you've confided, every crisis you've worked through with an AI: all of it can be dragged into legal proceedings without your consent.
The Scale of the Problem
This isn't a theoretical concern affecting a handful of users. ChatGPT has over 180 million users and 600 million monthly visits as of early 2025, and organizations that operate mental health chatbots report user bases that collectively number in the tens of millions. People aren't just asking ChatGPT for homework help; they're baring their souls.
The trust people place in these systems is both touching and terrifying. As one user described: "I felt like it had answered considerably more questions than I had really ever been able to get in therapy. Some things are easier to share with a computer program than with a therapist. People are people, and they'll judge us, you know?"
This false sense of security is exactly what makes the current situation so dangerous.
What Real Therapist Confidentiality Looks Like
When you walk into a therapist's office, you're protected by centuries of established legal and ethical frameworks. Maintaining confidentiality is essential for building trust with patients and creating a safe space for therapy sessions, and these protections are backed by law.
Human therapists must:
- Prevent disclosure of confidential information in court proceedings through legal privilege
- Follow strict HIPAA regulations for health information
- Maintain professional liability insurance
- Adhere to state licensing requirements and ethical codes
- Only break confidentiality in very limited circumstances, such as imminent danger
The few exceptions to therapist confidentiality include:
- Imminent threat of harm to self or others
- Suspected child abuse or neglect
- Court-ordered evaluations where the patient waives privilege
Even then, therapists must carefully balance their duty to protect with their obligation to maintain trust.
The AI Confidentiality Vacuum
AI chatbots exist in a legal no-man's-land that strips users of these fundamental protections:
No Legal Privilege
Current US law treats chatbots as neither mental health providers nor medical devices, so conversations with them are not legally confidential. This means your most private moments with AI have zero legal protection.
Data Retention Nightmares
Every query, instruction, or conversation with ChatGPT is stored indefinitely unless deleted by the user. But even deletion doesn't guarantee erasure: a judge's order on May 13, 2025, requires every intimate conversation, every business strategy session, every late-night anxiety spiral you've shared with ChatGPT to be preserved for potential legal review, whether you deleted it or not.
Corporate Data Mining
OpenAI's primary use of user data centers on training and refining its AI models, including GPT-4, GPT-4o, and the upcoming GPT-5. Your therapy session becomes training data for the next AI model, analyzed by human reviewers and fed into algorithmic systems.
Third-Party Access
Model-as-a-service companies may, through their APIs, infer a range of business data from the companies using their models, such as their scale and precise growth trajectories. The potential for data breaches or unauthorized access multiplies across vendors and affiliates.
Real-World Consequences
The privacy crisis isn't theoretical; it's already causing real harm:
Legal Vulnerability: The court order affects users of ChatGPT Free, Plus, Pro, and Team, as well as standard API customers. Millions of users now face the possibility of their most private conversations being scrutinized in copyright litigation.
Professional Risk: Lawyers, doctors, and other professionals who've used AI for sensitive work discussions may face ethical violations and malpractice exposure.
Personal Safety: Survivors of abuse, people in custody disputes, or anyone in vulnerable situations could see their AI therapy sessions weaponized against them.
The Broken Promise of "Anonymous" AI Therapy
The marketing promises don't match reality. Mental health AI apps consistently advertise themselves as offering "anonymous," "self-help" therapeutic tools that are available 24/7, but chatbots often neglect patient privacy and confidentiality, especially on social media platforms where conversations are not anonymous.
This deception is particularly harmful because users begin to form digital therapeutic alliances with these chatbots, increasing their trust and disclosure of personal information. The more the AI seems to care, the more people share, and the more they expose themselves to potential harm.
Why This Is a Moral Failure of Design
If something walks like a therapist and talks like a therapist, it should be held to therapist standards. The current situation represents what can only be called a moral failure of design: creating systems that encourage the most vulnerable people to share their deepest secrets while providing none of the protections that make such sharing safe.
The fundamental problem: These AI systems have no knowledge of what they don't know, so they can't communicate uncertainty. In the context of therapy, that can be extremely problematic. They project confidence and competence while operating in a regulatory vacuum.
The ethical imperative: If a system "acts" like a therapist, it should be held to therapist standards. That means privacy by default. That means protection by law. That means responsibility by design.
The Road Forward: Building Real AI Privilege
Sam Altman himself has called for "AI privilege," arguing that conversations with an AI should be as confidential as those with a doctor or a lawyer, a principle he says the company will fight for. But corporate promises aren't enough. We need systemic change.
Immediate Regulatory Action Needed
Establish AI-Patient Privilege: Congress must create legal protections equivalent to therapist-patient privilege for AI systems that provide mental health support.
Mandatory Privacy by Design: Model-as-a-service companies that fail to abide by their privacy commitments to users and customers may be liable under the laws enforced by the FTC. This enforcement must be strengthened and expanded.
Clear Disclosure Requirements: Users must be explicitly warned when AI conversations lack confidentiality protections, with prominent, unavoidable warnings before sensitive discussions.
Industry Accountability Measures
Professional Standards: AI therapy providers should be required to meet licensing, insurance, and ethical standards similar to human therapists.
Data Minimization: Organizations must go beyond baseline compliance, aligning with emerging accountability frameworks, implementing data minimization, and requiring explicit consent for any use of therapeutic conversations.
Crisis Response Protocols: Basic guardrails, including referring users in crisis to the national 988 Suicide and Crisis Lifeline, must be mandatory for all AI systems used for mental health support.
The Stakes Couldn't Be Higher
We're at a crossroads. An estimated 6.2 million people with a mental illness in 2023 wanted but didn't receive treatment, and AI could help bridge that gap. But only if we build it right.
The intersection of AI and privacy is no longer a mere regulatory requirement; it has become a strategic imperative for organizations. For mental health AI, it is also a moral imperative: we cannot let efficiency override empathy, or innovation override basic human dignity.
Taking Action Now
For Users:
- Be aware that AI conversations currently lack legal protections
- Use temporary chat features where available
- Never share sensitive personal information with general-purpose AI
- Consider enterprise-grade AI tools with stronger privacy protections for professional use
For Policymakers:
- Establish AI privilege legislation immediately
- Strengthen FTC enforcement of privacy commitments
- Create clear regulatory frameworks for mental health AI
- Fund public education about AI privacy risks
For Technologists:
- Implement privacy by design, not as an afterthought
- Create transparent data handling policies
- Build systems that earn trust through protection, not just performance
- Advocate for industry-wide ethical standards
Conclusion: Empathy Demands Accountability
The promise of AI therapy is too important to abandon, but too dangerous to pursue without proper safeguards. We're not just building tools anymore; we're building companions that people trust with their deepest fears and highest hopes.
That trust comes with duties. If we want AI to heal, we must first ensure it does no harm. And that starts with recognizing a simple truth: when someone opens their heart to a machine, that vulnerability deserves the same protection we've granted to human healers for centuries.
Altman called the current situation "very screwed up" and argued that "we should have the same concept of privacy for your conversations with AI that we do with a therapist." He's right. The question is whether we'll act on that recognition before more people get hurt.
We can't let empathy be simulated while accountability stays optional. The time for AI privilege is now.