There’s a particular kind of nightmare that only exists in the digital age. You confide in a therapist about your marriage, your finances, your self-esteem — the stuff you would never say out loud at a dinner party, much less in a courtroom. Two years later, every keystroke of that confession is sitting in front of opposing counsel, marked as Exhibit A.

This isn’t a hypothetical. This is what happened to Jennifer Kamrass.

A new investigation by Annie Gilbertson at Proof News, published April 28, 2026, lays out how a former AdventHealth nurse practitioner had her entire Talkspace messaging history with her therapist subpoenaed and produced in court — by the employer she had filed a pregnancy discrimination claim against. The therapist she trusted was forced to watch every back-and-forth she’d had with her client become discovery material. “When I came to understand how much information they had, I was shocked,” the therapist told Proof News. She declined to be named, citing concerns about her professional reputation.

This story matters far beyond one lawsuit. Talkspace CEO Jon Cohen recently told investors the company has compiled “8 billion words, 140 million messages, 6.2 million assessments” — what executives describe as “one of the largest mental health data banks in the world.” And the end goal, according to the company’s own investor disclosures, is training an AI therapy companion bot called TalkAI, for which the company plans to seek insurance reimbursement.

You read that right. Your worst Tuesday night is becoming someone else’s Series B.

The Mechanics of How Therapy Becomes Evidence

In a brick-and-mortar therapy practice, the legal trail is thin by design. A therapist might jot down a few sentences of progress notes after a 50-minute session. There’s no transcript. There’s no recording. If a court subpoenas your treatment records, what comes out is heavily abstracted — and HIPAA’s special carve-out for “psychotherapy notes” gives those handwritten observations elevated protection requiring explicit patient consent before disclosure.

Talkspace flipped that model. By making the medium itself a chat thread, the company transformed every “I think I might be having a panic attack” and “I’m scared my husband is cheating on me” into permanent, timestamped, fully searchable text. The transcript IS the treatment. The transcript IS the medical record.

And that medical record, as Kamrass discovered, is discoverable.

Her case began in 2021, when AdventHealth terminated her from her nurse practitioner role while she was nearly nine months pregnant. AdventHealth offered Talkspace as an employee mental health benefit — so when Kamrass started messaging a therapist about her anxiety over supporting her family and finding work right before giving birth, she was using a service her employer had paid for. She later filed a pregnancy discrimination claim. (A federal judge eventually ruled for AdventHealth, accepting the company’s argument that the termination was tied to a facility closure for financial reasons.) Her therapist had agreed to testify on her behalf — and the employer’s lawyers responded by securing a court order for her complete Talkspace records.

“You’ve now got written evidence of everything discussed and in a normal therapy session that wouldn’t be true,” said Peter Andreone, one of Kamrass’ attorneys. “They turned around and used it against her.”

That’s the trap. The convenience that made Talkspace a hit — text your therapist from bed, from your car, from a Target parking lot — is the same architecture that creates a forensic-grade record of your inner life.

The “Anonymized” Dodge

Talkspace executives consistently assure investors the data they’re hoarding is anonymized. As if that closes the conversation. It doesn’t.

“We know that information that’s been anonymized can very easily be reidentified,” Tori Noble, a staff attorney at the Electronic Frontier Foundation, told Proof News. “[HIPAA] is not enough protection.”

The reidentification problem is well-documented in the privacy research community. With enough behavioral context — and a transcript of personal disclosures is thick with behavioral context — even data stripped of names, addresses, and dates of birth can be matched back to specific people through cross-referencing with other datasets. Mention your employer, your kid’s school, the neighborhood where your in-laws live, and a vague description of your last vacation, and you’ve effectively signed your name on every page.
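
To make the cross-referencing concrete, here is a minimal sketch of a linkage attack in Python. Every record in it is invented for illustration: an “anonymized” transcript yields a handful of quasi-identifiers mentioned in passing, and joining them against a public dataset (voter rolls, school directories, data-broker profiles) narrows the match to a single name.

```python
# Minimal linkage-attack sketch. All records are fabricated.

# Quasi-identifiers an "anonymized" transcript might leak in passing:
anonymized_record = {
    "employer": "Riverside Medical",      # mentioned while venting about work
    "kids_school": "Oakdale Elementary",  # mentioned planning pickup times
    "neighborhood": "Maple Heights",      # where the in-laws live
}

# A public or purchasable dataset with names attached:
public_profiles = [
    {"name": "A. Smith", "employer": "Riverside Medical",
     "kids_school": "Lincoln Elementary", "neighborhood": "Maple Heights"},
    {"name": "J. Doe", "employer": "Riverside Medical",
     "kids_school": "Oakdale Elementary", "neighborhood": "Maple Heights"},
    {"name": "R. Lee", "employer": "City Transit",
     "kids_school": "Oakdale Elementary", "neighborhood": "Brookfield"},
]

def reidentify(record: dict, profiles: list[dict]) -> list[dict]:
    """Return every profile that matches all quasi-identifiers in record."""
    return [p for p in profiles
            if all(p.get(k) == v for k, v in record.items())]

print(reidentify(anonymized_record, public_profiles))
# -> [{'name': 'J. Doe', ...}]  three attributes, one match, name recovered
```

No names, no dates of birth, no addresses in the transcript, yet three offhand details still pin it to one person.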

HIPAA, the law most Americans assume is the iron wall protecting their health information, permits health data to be shared without patient authorization only after it has been “deidentified.” The Department of Health and Human Services even gives “psychotherapy notes” extra protection, requiring patient consent before disclosure for most purposes. But the reidentification gap is real, the cybersecurity threat surface in healthcare is enormous, and the law was written before any reasonable person imagined a private company would be sitting on 140 million message exchanges of intimate disclosures.

The healthcare sector is among the most heavily targeted industries for cyberattacks. A breach of Talkspace’s data lake wouldn’t just expose names and Social Security numbers — it would expose people’s worst fears, their unspoken thoughts about their spouses, their suicidal ideation, their custody battles, their drug use, their affairs. The blast radius would be staggering.

A Pattern of Privacy Failures

The Kamrass case isn’t the first time Talkspace has been in the privacy hot seat. The pattern stretches back nearly a decade and reads like a tour through every modern data privacy failure mode:

2020 — The data mining allegations. A New York Times investigation reported that former employees and therapists alleged Talkspace data scientists were reviewing anonymized client transcripts and pulling out frequently used phrases to feed the marketing team’s targeting models. Talkspace denied the allegations, but the company also faced reports that “trainers” who weren’t clinicians were reading therapist–client message exchanges to enforce sales scripts.

2018–2020 — The PsiAN libel suit. When psychologist Linda Michaels and her co-founders at the Psychotherapy Action Network wrote to the American Psychological Association raising concerns about Talkspace’s privacy practices, Talkspace responded with a $40 million libel suit. Michaels described the playbook to Proof News bluntly: “This was just a bullying tactic to try to get us to shut up.” The case was eventually dismissed on jurisdictional grounds.

2022 — The Senate letter. Senators Elizabeth Warren, Cory Booker, and Ron Wyden wrote to Talkspace expressing concerns about whether patient data was being shared with companies like Google and Facebook. The company’s chief legal officer responded that “all data related to their treatment is strictly used for therapeutic purposes.”

2024 — The NYC teens controversy. Parent advocates accused Talkspace of sharing personal information of New York City teenagers — through a city contract providing free therapy to 13-to-17-year-olds — with Meta (Facebook), Amazon, Google, and Microsoft via website trackers. Talkspace agreed to amend its data collection policy.

2024 — The TikTok tracker class action. A class-action lawsuit alleged Talkspace embedded TikTok’s “fingerprinting” software on its website, transmitting visitor data — including device details, geographic information, and medical information about minors — to TikTok before users even cleared the cookie banner. The plaintiff withdrew the suit in September 2025, but a withdrawn complaint doesn’t claw back data the trackers had already transmitted.
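
The “before users even cleared the cookie banner” detail is worth dwelling on, because it follows from how trackers are typically deployed: a script tag baked into the server-rendered HTML executes on first page load, before any consent widget can be clicked. A minimal audit sketch in Python (the tracker host list is illustrative, and https://example.com is a placeholder, not a claim about any specific site):

```python
# Scan a page's initial HTML for third-party tracker scripts.
# Anything hard-coded into the served HTML loads on first paint,
# i.e. before a visitor can interact with a cookie banner.
from html.parser import HTMLParser
from urllib.parse import urlparse
from urllib.request import urlopen

TRACKER_HOSTS = {             # illustrative list of known tracker domains
    "analytics.tiktok.com",   # TikTok pixel
    "connect.facebook.net",   # Meta pixel loader
    "www.googletagmanager.com",
}

class ScriptSrcCollector(HTMLParser):
    """Collects the src attribute of every <script> tag."""
    def __init__(self):
        super().__init__()
        self.sources = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                self.sources.append(src)

def trackers_in_initial_html(url: str) -> list[str]:
    html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
    collector = ScriptSrcCollector()
    collector.feed(html)
    return [s for s in collector.sources
            if urlparse(s).netloc in TRACKER_HOSTS]

if __name__ == "__main__":
    for src in trackers_in_initial_html("https://example.com"):
        print("loads before consent:", src)
```

A scan like this only catches scripts present in the initial HTML; trackers injected later by tag managers require a headless browser to observe, which is exactly why they so often escape notice.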

Each of these episodes individually might be defensible as a misunderstanding, a cleanup, a settled question. Together they describe an institution that has built its entire business model around extracting value from extremely sensitive data — and that has fought, lawyered up, and minimized at every step when challenged on it.

The AI Therapy Bot Endgame

Here’s where the story gets darker. Talkspace isn’t sitting on this mountain of intimate disclosures out of nostalgia.

CEO Jon Cohen has been telling investors that Talkspace is “strategically positioned… to be a leader in the application of AI to mental health” and that the company is preparing to release a “therapy companion” chatbot called TalkAI later this year, with plans to eventually secure insurance reimbursement for the automated tool. The training corpus? Those 140 million messages. Those 8 billion words. Every one of which was typed by a real person who believed they were having a confidential clinical conversation.

Talkspace’s privacy policy does disclose that the company uses chat, audio, and video communications to “develop new products.” European users get an explicit opt-out under EU privacy law. American users get a different deal: “If you do not want us to share personal data or feel uncomfortable with the ways we use information in order to deliver our Services, please do not use the Services.” That’s the legal hook — keep using us, you’ve consented; don’t like it, don’t come.

Research has consistently shown that the overwhelming majority of users never read terms of service agreements. Jodi Halpern, a UC Berkeley bioethics professor who studies chatbots, made the point cleanly to Proof News: “It’s very different than in actual human therapy where there’s a lot of training about the informed consent process. They just click it.”

And it gets worse. Therapists Proof News interviewed expressed concern that companion chatbots aimed at mental health users carry real risk of harm — and that the companies building them may ultimately intend to “replace therapists altogether.” That isn’t speculative. Google and Character.AI agreed to settle a lawsuit brought by the mother of 14-year-old Sewell Setzer III, alleging the Florida teen’s last conversation on Character.AI encouraged his suicide. Similar cases involving AI chatbots and people in crisis have been reported in Florida, Colorado, Oregon, and California.

Some states are responding. Illinois banned therapy bots last year. A California legislator introduced similar protections in January 2026. Unions representing therapists have backed both efforts. A union of Kaiser Permanente therapists went on strike March 18, 2026, after Kaiser refused to prohibit AI tools from replacing licensed clinicians.

Meanwhile, Universal Health Services Inc. announced in March it would acquire Talkspace for $835 million — folding the data bank, the chatbot ambitions, and the 200 million eligible patients into a healthcare conglomerate that operates 119 outpatient and 346 inpatient behavioral health facilities. That’s vertical integration in a sector where the raw material is human distress.

What This Means for Anyone Using a Mental Health App

If you take only one thing from the Kamrass case, take this: when you use a chat-based therapy app, the chat IS the medical record, and the medical record is subpoenable. The convenience comes packaged with a forensic risk that does not exist in traditional therapy.

That doesn’t mean digital mental health is uniformly bad — for many people in rural areas, with mobility limitations, with insurance constraints, or with stigma concerns, telehealth has been a genuine breakthrough. But the architecture matters. There is a meaningful difference between a video session that isn’t recorded and an asynchronous chat thread that creates a permanent transcript.

A few practical defensive moves if you’re currently using or considering a mental health app:

Read the data retention and disclosure policy before you sign up. Specifically look for: how long messages are stored, whether transcripts can be turned over in legal proceedings, whether the platform claims rights to use your conversations to train AI models, and whether there’s a data deletion mechanism. Talkspace, for example, retains transcripts as medical records for 10 years — and US users can request deletion but cannot opt out of the company’s product development uses.

Treat employer-sponsored mental health benefits with extreme caution. If your therapy is being paid for through your employer’s EAP or insurance plan, ask very pointed questions about who can subpoena what in the event of any future employment dispute. Kamrass got Talkspace through AdventHealth — the same employer that later subpoenaed the records.

Consider video over chat where possible. A live video session that isn’t recorded leaves no transcript. A messaging session leaves a permanent, searchable archive. The same clinical content produces a radically different evidentiary footprint depending on which medium you choose.

Assume reidentification is possible. Don’t take “we anonymize the data” as meaningful protection. Behavioral richness in transcripts makes reidentification far easier than people assume.

Push for state-level protections. The Illinois therapy bot ban and the California proposal are early signals. State attorneys general and legislators have driven most of the meaningful privacy enforcement of the past five years (TikTok tracking, EU-style protections in California, Texas and Washington biometric laws). Mental health data deserves at minimum the same treatment.

The Bigger Picture: HIPAA Was Built for Another World

The deeper story in the Kamrass case is that HIPAA — the law most Americans treat as the bedrock of medical privacy — was written in 1996. It was designed for a world where medical records were paper files in a manila folder and information sharing happened by fax. It was not designed for a world where a private company can amass 140 million conversations and use them to train a commercial AI product, then market that AI product back to the very healthcare system whose patients generated the training data in the first place.

HIPAA’s deidentification standard, its “psychotherapy notes” carve-out, and its breach notification rules are doing what they can. But none of those provisions contemplated the world we now live in, where intimate disclosures become training data, where anonymized records are reidentifiable through behavioral cross-referencing, and where the lines between healthcare provider, technology vendor, and AI developer are permanently blurred.

The Kamrass case is, in that sense, a preview rather than an outlier. Every chat-based health app — for therapy, for fertility tracking, for substance use treatment, for chronic illness management, for sexual health — is sitting on a similar archive. Every one of those archives is one subpoena, one breach, one acquisition, or one terms-of-service update away from doing exactly what Talkspace’s data did to Jennifer Kamrass.

Her therapist’s words deserve to be the closing thought:

“It is really taking advantage of vulnerable people at a vulnerable time of their life.”

That’s the business model. That’s the AI training data. And until Congress modernizes HIPAA, the courts limit how mental health transcripts can be used in civil litigation, or users walk away from chat-based therapy in numbers large enough to matter — that’s the future every digital mental health user is signing up for, one click of “I agree” at a time.


Protect Your Privacy Beyond Therapy Apps

The same data exposure pattern Talkspace built into chat therapy is being replicated across consumer healthcare. If you want to dig deeper into how to defend yourself:

  • Healthcare data breach tracking — Recent corporate data exposures and what to do if your information is included: Breached.company
  • Compliance and regulatory framework guides — The actual text and intent of HIPAA, GDPR, state-level privacy laws, and how they apply: ComplianceHub.wiki
  • Privacy assessment tools — Free tools to evaluate your specific privacy exposure across health, biometrics, and digital identity: MyPrivacy.blog Privacy Tools

For organizations and CISOs navigating the intersection of telehealth, AI training data, and HIPAA exposure, CISO Marketplace provides assessment, vCISO consulting, and incident response services tailored to behavioral health data environments.


Sources: Proof News reporting by Annie Gilbertson, Talkspace investor disclosures, U.S. Senate letters from Sens. Warren, Booker, and Wyden, Department of Health and Human Services HIPAA guidance, Electronic Frontier Foundation, Psychotherapy Action Network, Universal Health Services acquisition announcement, and previous reporting from The New York Times, Salon, and ClassAction.org.