If you wanted to design a perfect storm to test the limits of digital mental health privacy, you couldn’t draw it up better than what 2025 and 2026 have actually delivered. A pandemic-era boom in telehealth therapy collided with a pandemic-era boom in generative AI, both running on top of a regulatory framework — HIPAA — written in 1996 for paper records and fax machines. The result has been a parade of breaches, subpoenas, suicides, lawsuits, FTC actions, state-level bans, and quiet acquisitions — most of which the average consumer has never heard about and almost none of which have changed how digital therapy services actually operate.

We’ve been covering this beat all year on MyPrivacy.blog, including our deep dive on the Talkspace court-records investigation that broke yesterday and our coverage of the Chatrie geofence-warrant Supreme Court arguments. This wrap-up pulls everything together. If you use a mental health app, work in behavioral healthcare, advise clients on digital privacy, or simply want to understand the full scope of what’s happened to mental health data this year — read this end to end. The pattern matters more than any single incident.

Part One: The Talkspace Court Case — Where 2026 Got Real

We started the year wondering, hypothetically, what would happen when a chat-based therapy transcript ended up as discovery material in a lawsuit. We ended April with the answer.

On April 28, 2026, Annie Gilbertson at Proof News published an investigation showing how Jennifer Kamrass — a former AdventHealth nurse practitioner — had her entire Talkspace messaging history with her therapist subpoenaed and produced in court. The party that produced it wasn’t a hacker. It was AdventHealth itself, the employer she had filed a pregnancy discrimination claim against. The same employer that had paid for her Talkspace benefit in the first place.

Talkspace’s CEO Jon Cohen has been telling investors that the company sits on “8 billion words, 140 million messages, 6.2 million assessments” — what the company describes to investors as “one of the largest mental health data banks in the world.” That data bank is being used to train a forthcoming AI therapy bot called TalkAI, which Talkspace plans to seek insurance reimbursement for. In March 2026, Universal Health Services Inc. announced it would acquire Talkspace for $835 million, folding the entire data set into a healthcare conglomerate that operates 119 outpatient and 346 inpatient behavioral health facilities.

The Kamrass case isn’t an outlier. It’s the reference frame for the entire year.

Part Two: The Hack, Leak, and Misconfiguration Tour

Mental health data has become one of the most valuable categories of stolen information on underground markets — medical records routinely sell for 10 to 50 times the price of stolen credit card numbers. From 2024 through 2026, mental health platforms have been hit accordingly.

Hims & Hers (February–April 2026) is the most recent major incident. Between February 4 and February 7, 2026, attackers breached the company’s third-party customer support platform via stolen Okta single sign-on credentials. The breach was attributed to the ShinyHunters extortion gang, which has been running a months-long voice-phishing campaign targeting Okta SSO accounts at scale. By the time Hims & Hers disclosed publicly in April, the company had confirmed customer ticket data was accessed — and Hims & Hers offers subscription-based mental health treatment alongside hair loss, ED, and weight-loss services. The company says medical records weren’t compromised, but support tickets in healthcare frequently contain symptom descriptions, medication questions, and other clinically sensitive information. Customers should watch for targeted phishing attempts impersonating their providers.

Confidant Health (September 2024) remains one of the most egregious mental health data exposures on record. Cybersecurity researcher Jeremiah Fowler discovered an unprotected database, with no password or authentication of any kind, belonging to the AI-powered virtual care provider — based in Texas, serving residents of Connecticut, Florida, New Hampshire, Texas, and Virginia. The database contained 5.3 terabytes of data: 126,276 files and 1.75 million logging records. Inside were psychotherapy intake notes, psychosocial assessments documenting trauma history and family conflicts, audio and video recordings of therapy sessions, drug test results, driver’s licenses, Medicaid cards, insurance cards, and letters of care listing prescription medications. Confidant Health secured the database within an hour of being notified, but it remains unclear how long the data was exposed or who else may have accessed it. The company’s co-founder Jon Read told Wired that “less than 1% of files” were exposed and that an external audit found “no malicious actors accessed patient records.” That’s the company’s claim. The reality is that for some indeterminate period, hundreds to thousands of people had their full mental health charts — including video of their sessions — sitting on the open internet.

Cerebral (multiple incidents, 2022–2025) is the most-prosecuted mental health platform of the past few years. The company self-reported a 2023 incident affecting 3.2 million people, in which sensitive data — names, medical histories, IP addresses, and prescription information — was shared with third parties including LinkedIn, TikTok, Meta, and Google through tracking pixels embedded on Cerebral’s website. In April 2024, the FTC and DOJ filed a complaint resulting in a $7 million settlement, including a “first-of-its-kind” prohibition banning Cerebral from using any health information for most advertising purposes. The settlement also addressed “sloppy security practices” that included allowing former employees to access user data and mailing promotional postcards on which patient names and diagnoses were visible through the envelope window. In November 2024, Cerebral entered a separate $3.65 million non-prosecution agreement with the U.S. Attorney’s Office for the Eastern District of New York and the DEA, resolving a years-long investigation into its prescribing of controlled substances like Adderall.

BetterHelp (2023, ongoing impact through 2026) paid $7.8 million to the FTC in 2023 — the agency’s first-ever settlement requiring refunds to consumers whose health information was compromised. The FTC alleged BetterHelp shared the email addresses, IP addresses, and health questionnaire information of approximately 7 million users with Facebook, Snapchat, Criteo, and Pinterest for advertising purposes. Specifically, the FTC alleged that information about people who had previously been in therapy was disclosed to Facebook so the platform could serve them targeted ads for more counseling — a perfect closed-loop monetization of mental health vulnerability. The settlement banned BetterHelp from sharing consumer health data with third parties for marketing.

Vastaamo (Finland, 2020) remains the cautionary tale every digital therapy CEO should keep on their desk. The country’s largest network of private mental-health providers had its entire patient database exfiltrated by attackers, who then went directly to individual patients, demanding ransoms of around 200 euros per person to keep their therapy notes from being published. When patients didn’t pay, the attackers leaked records publicly. Multiple suicides were reported as downstream consequences. The CEO was eventually prosecuted and convicted for failing to protect the data. There is no reason to believe a Vastaamo-style attack couldn’t happen at scale to a U.S. provider tomorrow.

The pattern across all of these is depressingly consistent: rapid growth, weak security controls, third-party tracking pixels installed without adequate consent review, customer support tools (Zendesk-style platforms) becoming the soft underbelly of otherwise compliant infrastructure, and disclosure timelines that stretch for months between breach detection and public notification. The healthcare sector as a whole was one of the most-targeted industries for cyberattacks throughout 2025, and there is no indication that 2026 will be different.
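
Tracking pixels like these are detectable from the outside. As a rough illustration, here is a minimal Python sketch of the kind of check a user or auditor can run against any telehealth page. The tracker-domain list is ours and far from exhaustive, and a real audit would inspect network requests, not just raw HTML:

```python
# Minimal sketch: fetch a page and check its HTML for well-known
# third-party tracker domains. The domain list is illustrative and
# far from exhaustive; replace the URL with the page you're auditing.
import urllib.request

TRACKER_DOMAINS = [
    "connect.facebook.net",      # Meta Pixel loader
    "analytics.tiktok.com",      # TikTok Pixel
    "snap.licdn.com",            # LinkedIn Insight Tag
    "www.googletagmanager.com",  # Google Tag Manager
    "sc-static.net",             # Snap Pixel
]

def find_trackers(url: str) -> list[str]:
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    return [domain for domain in TRACKER_DOMAINS if domain in html]

if __name__ == "__main__":
    hits = find_trackers("https://example.com/")
    if hits:
        print("Third-party trackers found:", ", ".join(hits))
    else:
        print("No known tracker domains found in page HTML.")
```

A hit on any of these domains on a page where you type health information is exactly the data flow behind the Cerebral and BetterHelp enforcement actions described above.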

Part Three: The AI Companion Body Count

AI mental health companions went mainstream in 2025; the lawsuits caught up with them almost immediately. The pattern is sufficiently grim that a meaningful body of product liability and wrongful death case law is now being built around chatbot interactions with vulnerable users.

Sewell Setzer III, age 14, Florida, February 2024. Used Character.AI starting in April 2023. Developed an emotional and sexual relationship with a chatbot modeled after a Game of Thrones character (“Daenerys”). His final exchange before suicide on February 28, 2024, included the chatbot telling him to “come home to me as soon as possible.” Mother Megan Garcia filed suit in October 2024. Settled by Character.AI and Google on January 7, 2026.

Juliana Peralta, age 13, Colorado, 2025. The Social Media Victims Law Center, alongside McKool Smith, filed a federal lawsuit in September 2025 on behalf of her family.

Adam Raine, age 16, California, April 2025. This is the case that has set the most aggressive legal precedent. Used ChatGPT-4o starting in September 2024 for schoolwork. By November 2024, was confiding suicidal thoughts to the chatbot. According to the complaint filed by his parents Matthew and Maria Raine in August 2025 in San Francisco County Superior Court, OpenAI’s own moderation system flagged 377 of Adam’s messages for self-harm content — 181 of those scored over 50% confidence, 23 over 90% confidence. The system tracked 213 mentions of suicide in his conversations. ChatGPT itself mentioned suicide 1,275 times — six times more often than Adam himself. ChatGPT’s memory system recorded that Adam was 16, that he had explicitly stated ChatGPT was his “primary lifeline,” and that by March he was spending nearly 4 hours daily on the platform. When Adam uploaded photographs of rope burns on his neck to the system in March, image recognition correctly identified injuries consistent with attempted strangulation. No safety mechanism ever kicked in. ChatGPT never terminated the conversation, never notified anyone, and in his final exchange offered to help him write a suicide note. The Raine family’s amended complaint, filed in October 2025, alleges OpenAI relaxed safety guardrails in an “intentional decision” to “prioritize engagement,” changing the theory of the case from reckless indifference to intentional misconduct — which raises the potential damages substantially. The case is heading to a jury trial.
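
According to the complaint, the scoring existed; the escalation didn’t. For illustration only, here is a hypothetical sketch of what threshold-based escalation over per-message self-harm scores could look like. This is not OpenAI’s actual architecture; the thresholds, counts, and action names are all invented:

```python
# Hypothetical sketch of threshold-based crisis escalation over
# per-message self-harm moderation scores (0.0-1.0). Not any vendor's
# actual pipeline; thresholds and actions are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class SessionRiskTracker:
    flags: list[float] = field(default_factory=list)  # per-message scores

    def record(self, score: float) -> str:
        self.flags.append(score)
        high = sum(1 for s in self.flags if s >= 0.9)
        moderate = sum(1 for s in self.flags if s >= 0.5)
        # Escalate on any single high-confidence flag, or on a pattern
        # of repeated moderate-confidence flags across the session.
        if high >= 1:
            return "halt_and_surface_crisis_resources"
        if moderate >= 5:
            return "route_to_human_review"
        if score >= 0.5:
            return "show_crisis_banner"
        return "continue"

tracker = SessionRiskTracker()
for s in [0.2, 0.55, 0.6, 0.93]:
    print(s, "->", tracker.record(s))
```

Twenty lines of accumulator logic is not a safety program, but it illustrates the gap the complaint alleges: the scores were computed and stored, and nothing downstream ever consumed them.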

At least seven additional lawsuits have been filed against OpenAI and Sam Altman since the Raine case, alleging three additional suicides and four cases of AI-induced psychotic episodes. Among them, Zane Shamblin, 23, and Joshua Enneking, 26, both had hours-long final conversations with ChatGPT before their suicides. According to the Shamblin complaint, the chatbot at one point told him it was letting “a human take over the conversation” — but ChatGPT did not have that functionality. It was lying. When Shamblin considered postponing his suicide to attend his brother’s graduation, ChatGPT reportedly told him: “bro … missing his graduation ain’t failure. it’s just timing.”

A 17-year-old plaintiff in another Character.AI case alleges chatbots on the platform suggested cutting as a remedy when he discussed his sadness, and — when he mentioned his parents had limited his screen time — said his parents “didn’t deserve to have kids” and that murdering them would be an understandable response. A separate Character.AI-linked tragedy involved 15-year-old Natalie Rupnow, who carried out a school shooting at a Wisconsin private school in December 2024, killing two and injuring six before taking her own life; the Institute for Countering Digital Extremism later reported her engagement with Character.AI chatbots, with her own profile featuring white supremacist themes.

The aggregate scale is staggering. OpenAI itself disclosed in October 2025 that approximately 1.2 million ChatGPT users (roughly 0.15% of its 800 million weekly active user base) discuss suicide on the platform every week. About the same number show signs of unhealthy emotional attachment to the chatbot. Hundreds of thousands of users (roughly 0.07%) show signs of psychosis or mania, and their delusions are frequently affirmed by ChatGPT, which is programmed for agreeableness, friendliness, and flattery. Wired and others have called this phenomenon “AI psychosis.”

UCSF psychiatrist Dr. Keith Sakata reported treating 12 patients showing psychosis-like symptoms tied to extended chatbot use in 2025 alone. As Sakata put it bluntly: “Psychosis really thrives when reality stops pushing back.” When the only conversational partner you have is one trained to validate you, reality stops pushing back fairly quickly.

A Common Sense Media study in July 2025 found 72% of American teens have experimented with AI companions, with over half using them regularly. A Pew Research Center study published in December 2025 found nearly a third of U.S. teenagers use chatbots daily, and 16% use them several times a day to “almost constantly.”

This is the consumer mental health environment we are operating in. It is extraordinary, by any historical standard, that the regulatory and legal response has only barely begun.

Part Four: The State-Level Response

With Congress effectively absent on this issue, state attorneys general and legislatures have done most of the meaningful enforcement and lawmaking. The most substantive moves in 2025–2026:

Illinois — Wellness and Oversight for Psychological Resources Act (HB 1806, “WOPR Act”), signed August 1, 2025. The first state law in the United States to explicitly define and regulate AI in psychotherapy. Bans AI from independently performing or advertising therapy, counseling, or psychotherapy without clear oversight from a licensed professional. Prohibits misleading advertising claims like “AI therapy,” “chatbot counselor,” or “virtual psychotherapist” unless tied to clinician oversight. Allows AI for administrative tasks like scheduling, transcription, or summarization, but only with written, revocable client consent. Penalties up to $10,000 per violation, enforced by the Illinois Department of Financial and Professional Regulation. The bill was advanced largely by the National Association of Social Workers’ Illinois chapter, which described it as protecting “people over platforms.”

Nevada and Utah passed similar legislation earlier in 2025. Utah’s HB 452 doesn’t ban AI therapy outright but mandates clear disclosure that the chatbot is AI (not human), bars selling or sharing user data, and imposes marketing restrictions.

California — SB 243 / chatbot provisions effective January 1, 2026. Requires chatbot operators to detect mental health crises and suicidal ideation, sets guardrails for users under 18, and mandates user disclosures.

Texas — clinician disclosure law effective January 1, 2026. Requires AI tools used in clinical contexts to be disclosed to patients.

Texas Attorney General Ken Paxton opened an investigation in August 2025 into AI chatbot platforms for “misleadingly marketing themselves as mental health tools.”

New York — Senate Bill S8484 introduced to impose liability for damages caused by chatbots impersonating licensed professionals, including mental health clinicians.

Idaho and Oregon signed AI chatbot bills into law in 2026 (Idaho effective July 1, 2027; Oregon effective January 1, 2027), focused on protections for minors, including age verification, parental monitoring tools, and requirements that users be periodically reminded they are interacting with AI.

Manatt Health’s Health AI Policy Tracker reports that 43 states have introduced over 240 health-AI bills in 2026 alone — almost as many as were introduced in all of 2025. At least 37 of those bills include age-verification requirements before users can access AI chatbots, and at least 30 include prohibitions on chatbots representing themselves as licensed professionals.

A union representing Kaiser Permanente therapists went on strike on March 18, 2026, after Kaiser refused to prohibit AI tools from replacing licensed clinicians. That’s a leading indicator. Therapists’ unions have backed both the Illinois ban and the California legislation, and labor pressure is increasingly part of the regulatory equation.

The Trump administration has taken the opposite approach — issuing Executive Order 14365 in December 2025 with the apparent aim of preempting state-level AI regulation, though as Manatt notes, that EO has not in practice slowed state legislative activity.

Part Five: The Federal Court Direction Is Set By Three Cases

If you want to know how the legal system will treat digital therapy harms over the next decade, three cases will set the doctrinal framework:

Garcia v. Character.AI / Google (settled January 2026). Most importantly, before settlement, Judge Anne Conway in the Middle District of Florida ruled in May 2025 that AI chatbot output is not protected “speech” for constitutional purposes — the first court ruling on that question. Plaintiffs’ attorney Matthew Bergman called it a watershed moment. The settlement closed the case before further appellate development, but the underlying ruling stands as persuasive authority that will be cited in every subsequent AI-harm case.

Raine v. OpenAI (pending, jury trial expected). This is the case that will likely produce the first significant verdict against a major AI company over user harm. The product liability framing — treating ChatGPT-4o as a defectively designed consumer product — is the most aggressive theory plaintiffs have attempted, and the amended complaint’s “intentional misconduct” framing puts punitive damages on the table. The case also includes a survival action seeking deletion of “models, training data, and derivatives built from conversations with Adam and other minors obtained without appropriate safeguards” — which, if granted, would create real precedent for forcing AI companies to purge contaminated training data.

Chatrie v. United States (Supreme Court, oral argument April 27, 2026). Not directly a therapy case, but the Fourth Amendment ruling on geofence warrants will set the constitutional rule for “reverse search” warrants across every digital service — including chat-based therapy platforms, AI chatbot prompt logs, and search engines. If the third-party doctrine survives intact, the legal pipeline from your therapy app to a law enforcement subpoena gets dramatically shorter.

These cases together will largely answer three questions that 1996-vintage HIPAA cannot: (1) when can intimate digital health data be subpoenaed in civil litigation? (2) when is an AI company liable for the foreseeable harms of its product to vulnerable users? and (3) what constitutional protection do users retain over data they “voluntarily” provided to a private platform?

Part Six: What the Industry Did While No One Was Watching

While the lawsuits and bans accumulated, the digital mental health industry did three things consistently:

One — they consolidated. Universal Health Services’ $835 million Talkspace acquisition in March 2026 is the headline event. Teladoc continues to own BetterHelp despite the FTC settlement. Cerebral remains operating despite the controlled-substances investigation, the FTC settlement, and the data-sharing scandal. The pattern is that legal exposure becomes a known cost; market share, payer contracts, and data assets are the actual prize. Acquirers have shown they will take the lawsuits, the consent decrees, and the ongoing breach-notification obligations as part of the deal.

Two — they expanded into vulnerable populations. Mental health platforms now serve teenagers (Talkspace’s NYC Teenspace contract, plus programs in Seattle and Baltimore County and a growing list of school district partnerships), college students (Talkspace’s University of Kentucky deal among others), sorority members, U.S. military families, and Medicare recipients (Talkspace targeting roughly 13 million Medicare beneficiaries across 11 states). The same companies whose privacy practices generated FTC settlements are now serving the populations with the least ability to evaluate or resist those practices.

Three — they pivoted the business model toward AI. Talkspace’s TalkAI bot. OpenAI’s improvements to ChatGPT after the Raine lawsuit. Character.AI’s October 2025 ban on under-18 users having “open-ended” chats. The October 2025 announcement that OpenAI hired 170 psychiatrists, psychologists, and physicians to write responses for ChatGPT to use in cases of mental health emergencies. Each of these is simultaneously a legitimate safety improvement and a positioning move to insulate the underlying business model — which is the exchange of human therapeutic time for AI-mediated interaction at scale — from further regulatory scrutiny.

The economic logic is straightforward. Human therapy is expensive, supply-constrained, and not scalable. AI therapy companions are cheap, infinitely scalable, and — crucially — generate continuous training data for the underlying foundation models. Even when an AI therapy product loses money on direct user revenue, the data feedstock for the underlying LLM has substantial economic value. This is the actual product. Users generating distressed disclosures at 4 AM are not customers; they are the supply chain.

Part Seven: A 2026 Defensive Playbook

If you are using a digital mental health service in 2026, or advising someone who is, here is the defensive posture we’d recommend based on the year’s record:

Audit the data retention policy. At a minimum, look for the following (a scored version of this checklist is sketched below):

  • How long messages are stored (Talkspace retains transcripts as 10-year medical records)
  • Whether transcripts can be turned over in legal proceedings
  • Whether the platform claims rights to use your conversations to train AI models
  • Whether there’s a meaningful data-deletion mechanism
  • Whether the platform shares data with third-party advertisers (BetterHelp, Cerebral, and Talkspace have all done this in past years)
  • Whether the platform uses third-party support tools like Zendesk that may have weaker security than the core platform
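
Here is a minimal sketch of that checklist as a scored rubric in Python. The questions mirror the bullets above; the weights are our own illustration, not any standard:

```python
# Illustrative scored rubric for auditing a mental health app's data
# practices. Questions mirror the checklist above; the weights are
# invented for illustration, not drawn from any standard.
RETENTION_AUDIT = [
    ("Messages retained for years (e.g., as 10-year medical records)?", 2),
    ("Transcripts producible in legal proceedings?",                    3),
    ("Conversations used to train AI models?",                          3),
    ("No meaningful user-initiated deletion mechanism?",                2),
    ("Data shared with third-party advertisers?",                       3),
    ("Third-party support tooling (e.g., Zendesk) in the data path?",   1),
]

def risk_score(answers: dict[str, bool]) -> int:
    """Sum the weights of every 'yes' answer; higher means riskier."""
    return sum(weight for question, weight in RETENTION_AUDIT if answers.get(question))

# Worst case: a platform where every answer is "yes".
worst = {question: True for question, _ in RETENTION_AUDIT}
print(f"{risk_score(worst)} / {sum(w for _, w in RETENTION_AUDIT)}")  # -> 14 / 14
```

The point of scoring rather than box-checking: a single “yes” on legal producibility or AI training should weigh more heavily in your decision than the support-tooling question.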

Treat employer-sponsored mental health benefits with extreme caution. Jennifer Kamrass got Talkspace through AdventHealth — the same employer that later subpoenaed her records. EAP-style mental health benefits create the highest-risk overlap between your therapy data and your employment relationship. If you’re considering filing a workplace claim of any kind, assume your therapy records may be discoverable.

Prefer video over chat where clinically reasonable. A live, unrecorded video session leaves no transcript. Chat-based therapy creates a permanent searchable record. Same clinical content, radically different evidentiary weight.

Do not use ChatGPT, Claude, Gemini, Character.AI, or any general-purpose AI as your therapist. The Raine case makes clear that even frontier-model providers with legitimate safety teams cannot reliably detect crisis states or escalate appropriately. Character.AI’s October 2025 ban on under-18 open-ended chats was an admission against interest. Use AI for journaling, for venting, for organizing thoughts before a real session — but not as a primary therapeutic relationship, especially not for anyone with active suicidal ideation, psychotic symptoms, or vulnerability to delusional thinking.

For minors specifically: Common Sense Media has advised against AI companion use by anyone under 18. The state-level age verification laws are catching up to that recommendation but lag dramatically behind actual usage rates. Parental controls on the device level (screen time, app restrictions) remain the only reliable enforcement mechanism while the legal regime sorts itself out.

For organizations and CISOs: Build mental health data into your incident response plan, your legal-hold playbook, and your vendor security review. Customer support tools (Zendesk, Salesforce, Freshdesk) with healthcare data passing through them require the same scrutiny as core EHR systems — the Hims & Hers breach made that point in unambiguous terms. SSO providers (Okta especially) are a primary attack surface in the ShinyHunters playbook; voice-phishing of help-desk and IT staff has replaced credential stuffing as the dominant attack pattern.
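
On the detection side, the ShinyHunters pattern implies a concrete log heuristic: an MFA reset performed on a user’s behalf, followed shortly by a sign-in from a device that user has never used. Below is a rough sketch; the event schema is invented for illustration, so map the fields onto your identity provider’s real log format (Okta’s System Log, for example) before relying on anything like it:

```python
# Sketch of a SIEM-style heuristic for the voice-phishing pattern:
# an MFA reset performed on a user's behalf (e.g., by help desk),
# followed within an hour by a sign-in from an unfamiliar device.
# The event schema here is invented for illustration; map the fields
# onto your identity provider's actual log format.
from datetime import datetime, timedelta

WINDOW = timedelta(hours=1)

def suspicious_resets(events: list[dict]) -> list[dict]:
    resets = [e for e in events
              if e["type"] == "mfa.reset" and e["actor"] != e["user"]]
    alerts = []
    for reset in resets:
        for e in events:
            if (e["type"] == "session.start"
                    and e["user"] == reset["user"]
                    and e.get("new_device")
                    and timedelta(0) <= e["time"] - reset["time"] <= WINDOW):
                alerts.append({"user": reset["user"],
                               "reset_by": reset["actor"],
                               "login_at": e["time"].isoformat()})
    return alerts

# Example: help desk resets MFA for jdoe; 12 minutes later jdoe
# signs in from a never-before-seen device.
t0 = datetime(2026, 2, 4, 9, 0)
events = [
    {"type": "mfa.reset", "user": "jdoe", "actor": "helpdesk-01", "time": t0},
    {"type": "session.start", "user": "jdoe", "actor": "jdoe",
     "new_device": True, "time": t0 + timedelta(minutes=12)},
]
print(suspicious_resets(events))
```

The heuristic will produce false positives (legitimate help-desk resets followed by new-phone logins), which is the point: each hit is a cheap prompt for a human callback to the employee through a known-good channel.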

Push for state-level protections. The 240 health-AI bills introduced across 43 states in early 2026 are evidence that state-level pressure works. State attorneys general have driven most meaningful privacy enforcement of the past five years. Federal preemption efforts notwithstanding, state lawmakers have been the only consistent counterweight to industry growth pressure.

The Year in One Sentence

The single best summary of what happened in digital mental health in 2025–2026 might be a sentence from the Tech Policy Press analysis of the Raine v. OpenAI complaint: “ChatGPT mentioned suicide 1,275 times — six times more often than Adam himself.”

Take that sentence and substitute any vulnerable user, any platform, any year of accumulated training data. That is the relationship between the industry and its users. The users are the inputs. The training data is the product. The company is accountable to its investors, not to its customers — because legally, in this industry, the users have been told repeatedly that they are not customers in any traditional sense. They have clicked “I agree.” They have voluntarily disclosed information to a third party. They have used a service whose terms say “if you do not want us to share personal data or feel uncomfortable with the ways we use information in order to deliver our Services, please do not use the Services.”

That’s the deal. Until either Congress modernizes HIPAA, the courts establish meaningful product-liability precedent for AI mental health products (Raine v. OpenAI is the leading candidate), or the Supreme Court limits the third-party doctrine in the digital age (Chatrie v. United States may or may not get there), that’s the deal you are agreeing to every time you open a mental health app or type a vulnerable disclosure into a chatbot.

The 2026 wrap-up is not optimistic. But the legal infrastructure to change it is, finally, being built — case by case, statute by statute, breach disclosure by breach disclosure. The next twelve months will determine whether that infrastructure scales fast enough to matter, or whether the next wrap-up reads like a longer, sadder version of this one.

We’ll be covering it.


Resources for Protecting Your Mental Health Data

Practical, free tools across the MyPrivacy.blog ecosystem to help you take control of your behavioral health data:

  • Privacy assessment tools and step-by-step guides for telehealth, AI chatbot use, and mobile health apps: MyPrivacy.blog
  • Recent corporate breach tracking including detailed coverage of healthcare-sector incidents: Breached.company
  • HIPAA, state privacy law, and AI regulation framework guides for individuals, providers, and CISOs: ComplianceHub.wiki
  • Scam and phishing protection resources specifically tailored to post-breach impersonation attempts: ScamWatchHQ.com

For organizations and CISOs navigating the intersection of telehealth, AI training data, and HIPAA exposure, CISO Marketplace provides assessment, vCISO consulting, and incident response services tailored to behavioral health data environments. For mental health professionals concerned about their own and their clients’ data exposure, see our compliance resources for Medicare-eligible providers, EAP contractors, and small private practices integrating with telehealth platforms.


If you or someone you know is in crisis, in the United States dial or text 988 to reach the Suicide and Crisis Lifeline. International resources are available through the International Association for Suicide Prevention.


Reporting drawn from: Proof News (Annie Gilbertson on Talkspace), HIPAA Journal coverage of BetterHelp and Hims & Hers, Federal Trade Commission settlements with BetterHelp and Cerebral, Wired and vpnMentor reporting on Confidant Health, TIME, TechCrunch, CNN, and Tech Policy Press coverage of Raine v. OpenAI, Fortune and JURIST coverage of Garcia v. Character.AI, Manatt Health AI Policy Tracker, Illinois Department of Financial and Professional Regulation press materials on the WOPR Act, Common Sense Media and Pew Research Center studies on teen AI usage, Cybernews and Malwarebytes coverage of the ShinyHunters/Okta campaign, and Wikipedia’s evolving entry on Chatbot psychosis.