A veteran NPR host says Google stole his voice for NotebookLM. Whether he wins or loses, the case exposes a privacy gap that affects us all.
Meta Description: NPR host David Greene is suing Google for voice cloning. Learn how AI voice theft works, what laws protect you (AB 2602, AB 1836), and how to safeguard your voice data.
When David Greene listened to Google's NotebookLM for the first time last fall, he experienced something deeply unsettling: hearing himself speak words he never said.
"I was, like, completely freaked out," Greene told the Washington Post. "It's this eerie moment where you feel like you're listening to yourself."
The former NPR Morning Edition host, whose warm baritone woke up 13 million Americans daily for eight years, is now suing Google in California Superior Court. He claims the tech giant replicated his distinctive voice for NotebookLM's popular "Audio Overviews" feature, without asking permission or paying a cent.
Google denies it, claiming the voice belongs to a paid professional actor.
But here's what should concern you, regardless of who's telling the truth:
We now live in a world where AI can potentially capture and clone your voice convincingly enough that even your family can't tell the difference. And the laws protecting you? They're playing catch-up.
Think you're safe because you're not famous? Consider this:
Every time you say "Hey Google" or "Alexa," you're creating voice samples. Every customer service call. Every Zoom meeting. Every voice message. That data goes somewhere, and it's increasingly being used to train AI voice models.
You might never know if your voice contributed to an AI system. You probably already "consented" in some terms of service agreement you didn't read. And even if you found out, proving it and doing anything about it would be nearly impossible.
David Greene's lawsuit matters because it forces the question: In the age of voice AI, who owns your voice? And what happens when technology moves faster than our ability to consent to how it's used?
Your Voice Is Not Just Sound: It's You
Think about what makes your voice yours. Not just the sound waves, but the pauses. The rhythm. The way you emphasize certain words. The little verbal tics you've tried to eliminate and can't. The warmth (or edge) that comes through when you're talking to someone you love (or can't stand).
David Greene spent two decades crafting his voice into something distinctive. He grew up in Pittsburgh idolizing baseball announcer Lanny Frattare. In high school, he turned morning announcements into a radio show. He wrote his college application essay about wanting to be a public radio host. He learned the intimate art of speaking to millions as if talking to one friend.
"My voice is, like, the most important part of who I am," Greene has said.
And he's right, scientifically speaking.
Your voice is biometric data. Like your fingerprint. Like your iris scan. Like the unique pattern of blood vessels in your palm. Voice biometrics are increasingly used for authentication: your bank may use your voice to verify your identity. Some airports are experimenting with voice-based security. Your voice unlocks your phone.
This makes voice fundamentally different from, say, your hairstyle or fashion sense. It's not something you chose or can easily change. It's a biological signature, as unique as DNA, that you carry with you every time you open your mouth.
And unlike DNA, you spray it everywhere. Every phone call. Every voice memo. Every Zoom meeting. Every voice message to a friend. Every time you talk to Alexa, Siri, or Google Assistant.
Which raises an uncomfortable question: Who owns all those voice samples? And what can they do with them?
The NotebookLM Phenomenon: How an AI Podcast Tool Sparked a Voice Rights Lawsuit
Google's NotebookLM seemed innocent enough when it launched in 2024. It's an AI tool that transforms documents into conversational, podcast-style summaries. Upload a research paper, and two AI hosts, one male and one female, will banter their way through the key points, making dense material accessible.
The feature became a "sleeper hit" for Google in the AI arms race. By December 2024, Spotify had integrated it into Spotify Wrapped, offering personalized AI-hosted podcasts about your listening habits to millions of users.
People loved it. They also noticed something odd about the voices.
"So ... I'm probably the 148th person to ask this," a former colleague emailed Greene last fall, "but did you license your voice to Google? It sounds very much like you!"
Greene wasn't the only one people thought they heard. Online speculation linked the voices to various podcasters and radio hosts. But the Greene comparisons kept piling up. His wife's "eyes popped" when she heard it. Friends, family, and professional contacts reached out asking if he'd made a deal.
He hadn't.
The 53% Confidence Problem: Why Proving Voice Theft Is So Hard
Greene's lawsuit, filed in January 2026 with the powerhouse law firm Boies Schiller Flexner, makes several specific claims:
- Google replicated his distinctive voice without payment or permission
- The AI mimics his cadence, intonation, and speech rhythms
- It even copies his use of filler words, the "uhhs" and "likes" he's worked for years to minimize
To support these claims, Greene's team hired an AI forensic firm to analyze the artificial voice against recordings of the real Greene. Their conclusion: 53% to 60% confidence that Greene's voice was used to train the model.
What does that percentage actually mean?
Think of it like a DNA match probability, but much less reliable. AI forensic analysts compare acoustic features: pitch patterns, vocal timbre, speech rhythm, pronunciation quirks. A 53-60% confidence level means "more likely than not, but far from certain."
For AI voice analysis, experts say that's actually "relatively high." Why? Because AI-generated voices don't contain direct copies of training data. They synthesize patterns learned from potentially thousands of voices. Detecting one specific voice contributor in that mix is like identifying one specific coffee bean in a blended cup.
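To make the kind of acoustic comparison described above more concrete, here is a minimal sketch of how a similarity score between two voices might be computed, assuming each voice has already been reduced to a numeric "voiceprint" embedding by a speaker-recognition model. The vectors and names below are invented for illustration; real embeddings have hundreds of dimensions, and forensic analysts combine many such measurements before stating a confidence level.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two voiceprint vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical low-dimensional "voiceprints" for illustration only.
# A real pipeline would extract these with a trained speaker-verification
# model from many audio samples of each speaker.
real_greene = [0.91, 0.23, 0.55, 0.12]
ai_host     = [0.84, 0.31, 0.47, 0.20]
stranger    = [0.10, 0.88, 0.05, 0.61]

print(f"AI host vs. Greene:  {cosine_similarity(real_greene, ai_host):.2f}")
print(f"Stranger vs. Greene: {cosine_similarity(real_greene, stranger):.2f}")
```

A high score only says the voices point in a similar acoustic direction; it cannot prove one voice was in the other's training data, which is exactly why the forensic confidence in this case tops out around 60%.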
And that's the problem: the burden of proof falls on the victim.
We don't have reliable forensic methods to determine whose voice went into training an AI. The major AI companies treat their training data as closely guarded trade secrets:
- Google won't disclose exactly how it created NotebookLM's voices
- OpenAI won't reveal what data trained ChatGPT's voice features
- These "black boxes" remain intentionally opaque
The practical barrier: Even if your voice was cloned, proving it requires:
1. Suspecting it happened in the first place (how would you know?)
2. Accessing the AI system to hear the similarity (what if it's behind a paywall or regional restriction?)
3. Hiring expensive forensic experts (most people can't afford this)
4. Overcoming the "it could be coincidence" defense
This asymmetry favors the AI companies. They have all the data about what went into their training. You have none. And they're not volunteering it.
The Celebrity Precedents: From Bette Midler to Scarlett Johansson
Voice appropriation lawsuits aren't new. What's new is the AI dimension.
The Bette Midler Case: Midler v. Ford Motor Co. (1988)
Back in 1988, Bette Midler refused Ford Motor Company's request to sing "Do You Want to Dance" for a commercial. Ford hired one of Midler's former backup singers to imitate her distinctive style instead. The district court initially dismissed Midler's case, finding no legal basis for her claim.
But the Ninth Circuit Court of Appeals reversed, establishing a landmark precedent. The court ruled that when a "distinctive voice of a professional singer is widely known" and is "deliberately imitated in order to sell a product, the sellers have appropriated what is not theirs."
Critically, the court recognized that California law protects against appropriation of identity, and voice can be a component of identity as recognizable as a face.
This became the foundation for modern voice rights in America. It's why advertisers can't simply hire soundalikes for celebrity voices without consequences. It's why you sometimes see the fine print: "Celebrity voice impersonated."
Why this matters for Greene's case: The legal principle from Midler requires proving (1) the voice is distinctive and widely known, and (2) it was deliberately appropriated. Greene's challenge is demonstrating that both elements apply to an AI-generated voice.
The Scarlett Johansson Wake-Up Call (May 2024)
Then came the incident that made headlines worldwide.
OpenAI CEO Sam Altman approached Scarlett Johansson nine months before launching GPT-4o's voice features. He wanted to license her voice; not coincidentally, Johansson had voiced the AI companion in the 2013 film Her. She declined.
Two days before the product announcement, Altman tried again. Johansson hadn't responded. OpenAI unveiled GPT-4o anyway, complete with a voice called "Sky" that sounded strikingly like Johansson.
After the announcement, Altman posted just one word on X: "Her."
Johansson was not amused.
"I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference," she said in a statement.
OpenAI claimed Sky was based on a different professional actress and paused its use "out of respect for Ms. Johansson." No lawsuit was filed. The voice disappeared. But the message was clear: AI companies were willing to push boundaries on voice appropriation, even when explicitly told "no."
Why David Greene's Case Is Different
Greene isn't Bette Midler or Scarlett Johansson. He's famous, but famous in a specific way: a voice that millions recognized every morning without necessarily knowing his name or face.
This creates a fascinating legal question, as Cornell Law Professor James Grimmelmann notes: Courts must determine not just whether the voice sounds similar, but whether Greene is "famous enough for ordinary people to recognize it."
There's also the matter of harm. Midler lost a commercial opportunity. Johansson saw her likeness attached to a product she explicitly refused. Greene's situation is more nuanced.
He's concerned about his voice saying things he never would. "I read an article in the Guardian about how this podcast tool can be used to spread conspiracy theories and lend credibility to the nastier stuff in our society," Greene told reporters. "For something that sounds like me to be used in service of that was really troubling."
His former NPR colleague Mike Pesca put it more bluntly: "They used it to make the podcasting equivalent of AI 'slop' ... They have banter, but it's very surface-level, un-insightful banter, and they're always saying, 'Yeah, that's so interesting.' It's really bad."
For someone who built a career on the art of thoughtful conversation, that association stings.
California's New Voice Protection Laws
California, often the first mover on privacy issues, enacted two significant AI voice and likeness laws that took effect January 1, 2025.
AB 2602: Protecting Living Performers' Digital Replicas
Assembly Bill 2602, signed by Governor Newsom in September 2024, prevents unauthorized use of digital replicas in place of live performances. Key provisions:
Contract Requirements: If you're signing a contract that involves creating or using your digital replica, it must include:
- A "reasonably specific description" of the intended use
- Either legal counsel representation with clear commercial terms you've signed, OR
- Protection through a labor union collective bargaining agreement addressing digital replicas
Who It Protects: The law specifically targets performers (actors, voice artists, musicians) whose livelihoods depend on their voices and likenesses.
Why It Matters: Prevents studios from hiring you once, scanning your voice and likeness, then using your AI clone indefinitely without additional compensation or approval.
AB 1836: Protecting Deceased Persons' Digital Replicas
Assembly Bill 1836 extends protection beyond the grave, granting estates control over a deceased person's digital replica for 70 years after death.
Key Provisions:
- Companies cannot use a deceased person's voice or likeness without estate authorization
- Minimum statutory damages: the greater of actual damages or $10,000 per violation
- Fair use exemptions exist for news, documentary, satire, and parody
Real-World Impact: Want to use James Dean in a CGI film or recreate Marilyn Monroe's voice for a commercial? You need estate permission, and likely a licensing fee.
The Consumer Gap: What About the Rest of Us?
Here's the problem: These laws primarily protect performers and celebrities. What about ordinary people?
Consider these everyday scenarios where your voice is captured:
- Voice assistants: Every "Hey Siri," "OK Google," or "Alexa" command
- Customer service calls: "This call may be recorded for quality assurance"
- Video calls: Zoom, Teams, Google Meet recordings
- Voice messages: Texts, voicemails, social media voice notes
- Smart home devices: Doorbell cameras, home security systems
- Car systems: Built-in voice navigation and calling
Tech companies have collected billions of hours of human speech to train AI models. Buried in the terms of service you quickly scrolled past: language allowing companies to use your interactions to "improve products and services."
The uncomfortable truth: You've likely already "consented" to your voice being used for AI training, even though you had no idea that's what you were agreeing to.
Why Greene can sue but you probably can't:
- Greene is famous enough that people recognize the voice specifically as his
- Under Midler v. Ford, voice rights require proving distinctiveness and recognition
- If your voice was mixed into training data with millions of others, proving it was used is nearly impossible
- Even if you could prove it, you may have contractually waived your rights in some forgotten TOS agreement
This is the regulatory gap: California's new laws protect professional performers but offer little recourse for the average person whose voice ends up in an AI training dataset.
The "Paid Actor" Defense and What It Really Means
Google's response to Greene's lawsuit is straightforward: "The sound of the male voice in NotebookLM's Audio Overviews is based on a paid professional actor Google hired."
This defense, if true, would be entirely legitimate. Companies have every right to hire voice actors and use their voices in AI systems, provided they do so with proper contracts and compensation.
But it raises important follow-up questions:
Who is this actor? Google hasn't identified them publicly (likely due to privacy concerns and to avoid harassment).
What did they agree to? Did their contract fully disclose that their voice would be used to create an AI that could generate unlimited new content? Were they compensated appropriately for that scope of use?
Are they comfortable with the outcome? If people consistently mistake the AI voice for David Greene's, that might concern the actual voice actor whose work is being misidentified.
The voice acting industry context:
The voice acting profession has been significantly disrupted by AI. Many actors signed contracts years ago, before AI voice synthesis was viable, that granted broad rights to their recordings. Some have since discovered their voices being used in AI applications without additional compensation or approval beyond their original session fee.
This has led to industry-wide concerns about whether traditional voice work contracts adequately address AI use, and whether actors fully understood what they were agreeing to before the technology existed.
California's AB 2602 was designed specifically to address this issue going forward, requiring contracts to spell out digital replica uses explicitly.
The "archetypal voice" question:
Adam Eisgrau, AI Copyright Policy Director at the Chamber of Progress, outlined the core legal question: "If a California jury finds that the voice of NotebookLM is fully Mr. Greene's, he may win. If they find that it's got attributes he also possesses, but is fundamentally an archetypal anchorperson's tone and delivery it learned from a large dataset, he may not."
In other words: Is there such a thing as a "generic warm male podcaster voice"?
Think about it: Professional broadcasters are often trained to speak in similar ways. Clear enunciation. Moderate pace. Warm but authoritative tone. Minimal regional accent. These are industry standards, not individual trademarks.
If an AI learns these general qualities from training data, is that different from a voice acting student learning them in class?
The legitimate AI use case:
It's worth acknowledging that voice AI has valuable, non-exploitative applications:
- Accessibility tools that give voice to people who've lost their speech
- Language learning apps that provide natural conversation practice
- Audiobook narration that makes more content accessible
- Customer service systems that can handle routine queries 24/7
The technology itself isn't the villain. The question is whether it's being developed and deployed ethically, with proper consent and compensation for the humans whose voices make it possible.
What This Means for You: Why Every Voice Assistant User Should Pay Attention
David Greene has the resources and recognition to fight Google in court. Most of us don't. But his case illuminates risks we all face in our daily lives:
1. Your Voice Is Already Captured, More Than You Realize
Every interaction with voice-enabled technology creates data that companies store and analyze:
Daily voice capture:
- Those "Hey Siri" wake-word samples
- Google Assistant queries ("What's the weather?")
- Alexa commands ("Play music")
- Dictated text messages
- Voice-to-text searches
- Customer service calls
- Smart doorbell conversations
- Car navigation voice commands
Where it goes:
- Stored on company servers (often indefinitely unless you manually delete)
- Used to "improve services" (which may include AI training)
- Sometimes reviewed by human contractors for quality assurance
- Potentially shared with third-party partners under broad data-sharing agreements
The scale: Amazon alone processes billions of Alexa voice requests per year. Google Assistant handles similar volumes. That's an enormous reservoir of human voice data, most of it collected from ordinary people who never considered how it might be used.
2. "Consent" Was Never Meaningful
Remember clicking "I Agree" on those terms of service?
What you probably thought you agreed to: "They'll store my voice commands to make the service work better."
What you may have actually agreed to: Language like this (from actual Google Terms):
- "We use information we collect ... to provide, maintain, protect and improve our services, to develop new ones ..."
- "We may use automated systems to analyze your content ..."
The bait-and-switch: These terms were written to be legally broad enough to cover future uses, including AI training, that consumers couldn't have anticipated when they agreed.
Courts are beginning to question whether this constitutes meaningful informed consent. But for now, you've likely already signed away rights you didn't know you were giving up.
3. Detection Is Nearly Impossible Without Resources
Even with David Greene's resources, expert analysis could only reach 53-60% confidence. For the average person:
The detection problem:
- You probably wouldn't even know your voice was cloned (who systematically listens to new AI voices to check?)
- If you suspected it, you'd need expensive forensic analysis
- Even forensics may not provide conclusive proof
- The burden of proof is entirely on you, not the company
The asymmetry:
- AI companies know exactly what training data they used
- They have no legal obligation to tell you
- They can claim trade secret protection to avoid disclosure
- You have no way to audit their training datasets
4. The Law Is Playing Catch-Up, and May Never Catch Up
What's protected now:
- Professional performers in California (AB 2602, AB 1836)
- Celebrity voices under common law right of publicity
- Deceased persons' voices (in California, for 70 years)
What's NOT clearly protected:
- Ordinary people's voices used in training data
- "Inspired by" or "in the style of" voices that aren't direct copies
- Voices captured with broad TOS consent
- Aggregate use of many voices together
The regulatory vacuum:
- Federal legislation has been proposed but not enacted
- Most states have no AI-specific voice protection laws
- International coordination is minimal
- Technology is advancing faster than legal frameworks
Reality check: By the time laws catch up, AI voice technology may be so ubiquitous that retroactive protection is meaningless. The voices will already be in the models.
Protecting Your Voice in the Age of AI: Practical Steps You Can Take Today
You can't completely prevent voice capture in the modern world. But you can be more deliberate about controlling your voice data:
1. Audit Your Voice Assistant Settings (Do This Now)
Google Assistant:
- Visit myactivity.google.com/myactivity
- Click "Voice & Audio activity"
- Delete past recordings and turn off future storage if desired
- Settings > Google Assistant > disable "Voice Match" if you don't need hands-free activation
Amazon Alexa:
- Alexa app > Settings > Alexa Privacy > Review Voice History
- Enable "Don't save recordings" or set auto-delete to 3 months
- Turn off "Help Improve Amazon Services" to opt out of human review
Apple Siri:
- Settings > Siri & Search > disable "Listen for 'Hey Siri'"
- Settings > Privacy > Analytics > disable "Share Siri & Dictation"
- Settings > Siri & Search > Siri History > Delete Siri & Dictation History
2. Read the Fine Print (Especially for These)
Before participating in voice-enabled services, look for these red flags in privacy policies:
Warning phrases:
- "We may use your data to train machine learning models"
- "We use your interactions to improve our services and develop new features"
- "We may share anonymized voice data with third parties"
High-risk situations:
- Voice-enabled surveys or market research
- New AI apps requesting microphone access
- "Free" transcription or voice-to-text services
- Beta testing programs for voice features
Questions to ask:
- Will my voice recordings be used for AI training?
- Can I opt out of AI training while still using the service?
- How long are recordings stored?
- Will you notify me if this policy changes?
3. Minimize Your Voice Data Footprint
Simple actions:
- Use push-to-talk instead of "always listening" mode
- Disable voice features you don't actually use
- Type instead of dictate when practical
- Use wired headphones for private calls (many wireless earbuds send data to manufacturers)
- Decline voice-based customer service when you have the option to chat or email
For video calls:
- Ask before recording; if someone else is recording, your voice enters their data ecosystem
- Use end-to-end encrypted platforms when discussing sensitive topics (Signal, FaceTime)
- Be cautious about Zoom AI features like automatic transcription or meeting summaries
4. Document Your Voice (If You Use It Professionally)
If you're a podcaster, voice actor, content creator, or public speaker:
- Maintain dated recordings of your natural speaking voice
- Document your vocal characteristics (pitch range, speaking pace, signature phrases)
- Register copyrighted works that feature your voice
- Consider this evidence if you ever need to prove voice misappropriation
5. Support Stronger Privacy Laws
The Greene case may influence pending federal legislation. Consumer advocacy matters:
- Contact your representatives about AI voice protection laws
- Support organizations advocating for digital rights (EFF, Consumer Reports, EPIC)
- Comment on proposed regulations when public comment periods open
- Share information about voice privacy with your networks
Pending federal legislation to watch:
- NO FAKES Act (Nurture Originals, Foster Art, and Keep Entertainment Safe)
- Various AI transparency and consent bills in committee
6. Know Your Rights (State-by-State)
If you live in California:
- You have rights under the CCPA to request what voice data companies have collected about you
- You can demand deletion of your data
- If you're a performer, AB 2602 gives you contract protections
If you live elsewhere:
- Check whether your state has biometric privacy laws (Illinois, Texas, and Washington have strong protections)
- Voice data is often classified as "biometric information" under these laws
- You may have the right to sue for unauthorized biometric data collection
7. Assume Recording in Professional Contexts
Be especially cautious when:
- Participating in customer service calls ("this call may be monitored ...")
- Speaking at recorded events or webinars
- Being interviewed for podcasts or video content
- Using work-provided devices or platforms (your employer likely owns the data)
- Participating in beta tests for AI features
Ask explicitly:
- "Will this recording be used for AI training?"
- "Can I opt out of AI training while still participating?"
- "Who has access to this voice data?"
The uncomfortable reality: Once your voice enters a dataset, it's nearly impossible to remove. Prevention is your only practical protection.
The Bigger Picture: This Is About More Than One Lawsuit
David Greene doesn't describe himself as an anti-AI activist. "I'm not some crazy anti-AI activist," he told reporters. "It's just been a very weird experience."
He's not asking for innovation to stop. He's asking for something that should be simple: "Google should have asked permission."
That's the heart of this case, and the heart of the privacy challenge AI presents.
The race to train AI has created an "ask forgiveness, not permission" culture:
- Companies harvest human data at massive scale
- Training datasets are kept secret
- Consent is buried in incomprehensible terms of service
- Compensation is rarely offered to the people whose data powers the AI
- By the time anyone objects, the models are already trained and deployed
This pattern extends beyond voices. It includes writing, art, photography, code, personal communications: the sum total of human creative and communicative output, absorbed into systems that can now generate convincing imitations.
The technology is remarkable. The ethics are murky. The laws are scrambling to keep up.
What's Really at Stake
Whether Greene wins or loses, his lawsuit forces a conversation we desperately need to have:
In a world where AI can clone your voice, your face, your writing style, what rights do you have to your own identity?
Your voice is not just sound. It's you:
- Biometric data as unique as your fingerprint, carrying patterns developed over a lifetime
- How your children recognize you on the phone before you say your name
- How your partner knows your mood from your tone
- How the world experiences who you are
Should tech companies be able to capture it, synthesize it, and profit from it, without even asking? Without compensating you? Without giving you any control over how it's used?
David Greene says no. The courts will decide his case. But the question it raises belongs to all of us.
Why This Lawsuit Matters for Every Consumer
Even if you're not David Greene, this case could set precedents that affect your rights:
If Greene wins:
- Courts may establish clearer standards for what constitutes voice theft
- AI companies may be required to be more transparent about training data
- Discovery could reveal how major tech companies actually build their voice AI
- Other individuals may have legal grounds to challenge voice cloning
If Greene loses:
- It may signal that AI voice synthesis is too different from traditional voice impersonation for existing laws to cover
- The burden of proof may be too high for most people to ever successfully challenge voice cloning
- It could embolden AI companies to be even less cautious about whose voices they use
Either way:
- The case will generate public awareness about voice data privacy
- It may spur stronger legislation at the state and federal level
- It puts pressure on AI companies to develop more ethical practices
- It reminds us all that we have a voice (pun intended) in how this technology develops
You're Not Powerless
Yes, the technology is already here. Yes, your voice data is probably already out there. Yes, the legal protections are inadequate.
But consumer awareness and action still matter:
The Scarlett Johansson incident showed that public pressure works: OpenAI pulled the "Sky" voice within days. Companies care about their reputation. Legislators respond to constituent concerns. The future of voice AI isn't written yet.
What you can do:
- Take the practical steps outlined in this article to minimize your voice data footprint
- Support stronger privacy legislation
- Demand transparency from companies about AI training data
- Make informed choices about which voice-enabled services you use
- Share information with others who may not realize these risks
Technology doesn't develop on its own. It reflects the choices companies make and the standards we demand.
David Greene is standing up for the principle that your voice is yours. Whether you're a famous broadcaster or someone who just wants to use Alexa without feeding an AI training dataset, that principle matters.
The question isn't whether AI voice technology will exist; it already does. The question is whether it will develop with respect for human rights and dignity, or whether those will be treated as obstacles to innovation.
That's a question worth raising your voice about.
What Comes Next
The Greene v. Google case is in its early stages. Discovery, the phase in which Google might be compelled to reveal how NotebookLM's voices were actually created, could prove decisive. If the case proceeds to trial, it could set precedent for how courts evaluate AI voice similarity claims.
Meanwhile, watch for:
- Federal legislation: Multiple bills addressing AI and creative rights are pending in Congress
- Industry self-regulation: After the Johansson incident, some AI companies published voice ethics guidelines
- More lawsuits: Greene likely won't be the last person to recognize themselves in an AI voice
The age of synthetic media is here. The question is whether we'll shape it deliberately, or let it shape us.
Have you encountered AI voices that sounded familiar? Concerned about your voice data? Share your thoughts at privacy@myprivacy.blog.