Somewhere in Sihanoukville, Cambodia, a 24-year-old Uzbekistani woman named Angel records a selfie-style video for recruiters. She speaks fluent English, Chinese, Russian, and Turkish. She has one year of experience as an “AI model.” She arrived in the city that day and is ready to start work immediately.
Her job, according to a WIRED investigation published today, will involve sitting in front of a computer while deepfake software maps her face onto a fabricated persona — and making up to 100 video calls per day to victims of pig-butchering scams who want to verify that the person they’ve been talking to online is real.
Angel is not an outlier. She’s part of an emerging labor market that sits at the intersection of artificial intelligence, organized crime, and human trafficking — one that has profound implications not just for scam victims, but for anyone whose face, voice, or biometric data exists on the internet.
The AI Face Model Economy
WIRED’s review of dozens of Telegram channels revealed a structured recruitment pipeline for what the scam industry calls “AI face models” or “real face models.” Job ads posted across these channels seek young, multilingual, attractive people — overwhelmingly women in their early twenties — willing to travel to Cambodia and elsewhere in Southeast Asia for six-month contracts.
The job requirements read like a dystopian parody of a modeling agency listing:
- Approximately 100 to 150 video calls per day
- Working hours from 10 PM to 10 AM
- One full day off and four half days off per month
- “Filters may be used, but ensure the image is realistic. Wigs are prohibited.”
- The company will “retain your passport for visa and work permit management”
That last requirement — passport confiscation — is one of the primary mechanisms scam compound operators use to hold trafficking victims captive. It transforms what might look like a voluntary job into a trap.
Applicants tracked by WIRED, the Vietnamese nonprofit ChongLuaDao, and the anti-trafficking organization Humanity Research Consultancy have come from Turkey, Russia, Ukraine, Belarus, and multiple Asian countries. Some request salaries of up to $7,000 per month. Others make more telling requests about working conditions: one woman asked for her own room and the assurance that she “can go outside.” Another asked to “go home on day off” and for a “personal washing machine.”
These aren’t the requests of people expecting normal employment. They’re the negotiations of people who understand, at some level, what they’re walking into.
How It Works: Deepfakes as Trust Infrastructure
To understand why scam operations need human faces at all, you need to understand the vulnerability they’re exploiting — and why this represents a fundamental shift in how deepfakes threaten identity verification.
Pig-butchering scams — named for the practice of “fattening” a victim with attention before “slaughtering” them financially — have generated an estimated $12.4 billion in losses. They typically begin with a wrong-number text or a dating app connection. Over weeks or months, the scammer builds a relationship — romantic, friendly, or professional — before steering the victim toward fake cryptocurrency investment platforms.
The weak point in these operations has always been the video call. Victims eventually want to see who they’re talking to. Stolen photos can sustain a text-based relationship, but they collapse the moment someone asks to FaceTime.
Enter the AI face model. Using real-time deepfake face-swapping software — platforms like Haotian, which WIRED previously investigated in December 2025 — scam compounds can now map a real person’s face and expressions onto the fabricated persona that’s been chatting with the victim. The model sits in an “AI room” within the compound, takes the call, and the victim sees a live, responsive human face that matches the photos they’ve been sent.
It’s not perfect. Anti-fraud researcher Frank McKenna, who set up a video call between his mother and one of these operations, described the result as “kind of glitchy” with “other people in the room with her, so there’s echoing.” But it doesn’t need to be perfect. It just needs to be convincing enough that a victim who wants to believe crosses the threshold from suspicion to trust.
A month later, McKenna saw what appeared to be the same model posting a recruitment video — her contract had expired and she was looking for new work.
The Privacy Catastrophe Hiding in Plain Sight
The AI face model phenomenon is disturbing on its surface. But the privacy implications run far deeper than the immediate scam operation.
Your biometric data is the raw material
Every deepfake face-swap starts with training data — photographs and video of a real person. The models being recruited in Cambodia provide this in person, but the technology works with any sufficiently detailed facial imagery. Your social media photos, your video conference recordings, your LinkedIn headshot — all of it can serve as source material for face-swapping systems.
This isn’t hypothetical. It’s the logical extension of a problem that privacy researchers have been warning about for years: the more facial imagery you put online, the more raw material you provide for identity synthesis. The difference in 2026 is that the technology has crossed the threshold from “impressive demo” to “operationalized at industrial scale in actual criminal enterprises.”
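To make “raw material” concrete: here is a minimal sketch, using the open source insightface library, of what happens to a single posted photo. The filename is hypothetical and the model pack is just one public option; the point is how little input the pipeline needs.

```python
import cv2
from insightface.app import FaceAnalysis

# "buffalo_l" is a publicly available face detection + recognition model pack.
app = FaceAnalysis(name="buffalo_l")
app.prepare(ctx_id=0, det_size=(640, 640))  # GPU 0 if available

# Any single, reasonably sharp image works: a profile picture, a LinkedIn
# headshot, one frame of a conference recording. (Hypothetical filename.)
img = cv2.imread("profile_photo.jpg")

faces = app.get(img)  # detect faces, then compute identity embeddings
if faces:
    # normed_embedding is a 512-dimensional identity vector. A one-shot
    # face swapper consumes this vector, not the photo itself, to impose
    # one person's identity onto someone else's live video.
    print(faces[0].normed_embedding.shape)  # (512,)
```

One image yields a compact identity vector that can be stored, sold, and reused indefinitely, with no consent and no notification.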
Real-time face swapping breaks video verification
For years, video calls have been the gold standard for proving someone is who they claim to be. Banks run video know-your-customer (KYC) checks. Remote employers conduct video interviews. People verify online dating matches through FaceTime calls. All of these systems are now undermined.
When scam compounds can hire models specifically to defeat video verification at scale — 100 to 150 calls per day, per model, works out to a fresh call roughly every five to seven minutes across a 12-hour shift, multiplied across potentially hundreds of models — the assumption that “if I can see them on video, they’re real” is dead. The implications for remote identity verification are severe.
This doesn’t just affect scam victims. It affects every system that relies on visual identity confirmation. Corporate authentication, legal depositions conducted remotely, telemedicine visits — any process that treats a live video feed as proof of identity is now operating on a broken assumption.
The consent paradox
Some of the AI models being recruited appear to be participating voluntarily — they’re posting application videos, listing their experience, and negotiating salaries. Others are clearly being trafficked. And the line between the two is deliberately blurred.
Ling Li, cofounder of the EOS Collective, which works with victims of the scam industry, told WIRED that even “voluntary” models face harsh treatment. “One European victim told us that he saw some Italian models in his compound, but he cannot tell if they are willingly or not because they were beaten in front of him. And also there is some sexual harassment.”
This creates an impossible consent framework. The models’ biometric data — their faces — is being used to commit fraud against thousands of victims. Even if a model initially consented to the work, they cannot meaningfully consent on behalf of the victims whose trust is being exploited through their likeness. And once their facial data has been captured by these operations, there’s no mechanism to revoke it.
The Scale of the Problem
The numbers from WIRED’s investigation suggest this isn’t a niche phenomenon. Hieu Minh Ngo, a cybercrime investigator at ChongLuaDao, identified around two dozen active Telegram channels with AI model job postings. Humanity Research Consultancy has tracked applicants flowing into “known scam hub cities” across Southeast Asia.
This sits within a broader crisis that continues to escalate. The United Nations estimates that 220,000 people are currently held in forced labor in scam compounds across Cambodia and Myanmar alone. These compounds generate approximately $43.8 billion annually. Recent crackdowns in Myanmar have resulted in mass arrests, but the operations adapt and relocate faster than enforcement can follow.
The AI model recruitment pipeline is the latest adaptation. As AI-powered scams scale faster than regulation, the scam industry is professionalizing its deepfake capabilities — moving from ad hoc face-swapping using stolen imagery to a dedicated workforce of human “deepfake interfaces.”
What Telegram Isn’t Doing
WIRED provided Telegram with a list of two dozen channels actively recruiting AI models. Telegram did not remove any of them.
A Telegram spokesperson told WIRED that “content that encourages or enables scams is explicitly forbidden” but that “there are legitimate reasons one might give their likeness, and so such content must be examined on a case-by-case basis.”
This response is difficult to take seriously when the channels in question contain posts explicitly referencing “love scam” as a job market and applicants describing their experience as “3 year as customer service (killer) of scamming platform crypto.” The term “killer” is scam compound jargon for someone who closes deals — i.e., convinces victims to send money.
Telegram’s reluctance to act on recruitment channels that openly use scam industry terminology represents a platform governance failure that directly enables human trafficking and financial fraud at scale.
The Privacy Defense Playbook
The emergence of AI face models demands a reassessment of how we think about identity, verification, and the biometric data we share online.
Limit your facial data exposure
Every high-resolution photo and video you post online is potential training data for face-swapping systems. This doesn’t mean you need to disappear from the internet, but it does mean thinking critically about what you share and where.
- Audit your social media privacy settings. Ensure photos are shared with intended audiences, not the public. Platforms like LinkedIn and Discord have specific settings worth reviewing.
- Be cautious with video content. Live streams, video podcasts, and conference recordings all provide high-quality facial data.
- Consider watermarking. Some emerging tools embed invisible watermarks in images that can survive deepfake processing, providing a forensic trail; a minimal example follows this list.
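As a sketch of the watermarking idea, the snippet below uses the open source invisible-watermark package to embed and later recover a short payload. The payload and filenames are hypothetical, and whether any given watermark survives a particular face-swapping pipeline is an open question, so treat this as illustration rather than protection.

```python
import cv2
from imwatermark import WatermarkEncoder, WatermarkDecoder

PAYLOAD = b"owner:example"  # hypothetical ownership tag

# Embed the payload invisibly in the image's frequency domain (DWT + DCT).
bgr = cv2.imread("headshot.png")
encoder = WatermarkEncoder()
encoder.set_watermark("bytes", PAYLOAD)
cv2.imwrite("headshot_wm.png", encoder.encode(bgr, "dwtDct"))

# Later, test whether a suspect copy of the image still carries the mark.
suspect = cv2.imread("headshot_wm.png")
decoder = WatermarkDecoder("bytes", len(PAYLOAD) * 8)  # payload length in bits
recovered = decoder.decode(suspect, "dwtDct")
print(recovered == PAYLOAD)  # True if the watermark survived
```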
Don’t trust video alone
If someone you’ve met online wants to prove they’re real via video call, a single call is no longer sufficient verification. Instead:
- Check the edges of the face. Real-time deepfakes still struggle with hairlines, ears, and the boundary between face and background.
- Watch for lighting mismatches. If the lighting on the face doesn’t match the lighting in the room, the face may be synthetic.
- Ask for unexpected movements. Request that they turn their head sharply, hold something up to their face, or cover part of it with a hand. Current face-swapping technology handles these poorly; a simple challenge generator is sketched after this list.
- Verify through multiple independent channels. A video call alone proves nothing. Cross-reference with other forms of identification.
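To make the unexpected-movement check harder to rehearse, you can pick challenges at random at call time. A minimal sketch, with challenges drawn from the list above plus obvious variants:

```python
import random

# Physical challenges that current real-time face swappers handle poorly:
# sharp pose changes, occlusion, and objects crossing the face boundary.
CHALLENGES = [
    "Turn your head sharply to the left, then to the right",
    "Cover half your face with your hand for three seconds",
    "Hold a pen or any object directly in front of your nose",
    "Push your hair back off your forehead and ears",
    "Touch your ear, then your chin",
]

def verification_challenges(n: int = 3) -> list[str]:
    """Pick n distinct challenges at random so they can't be rehearsed."""
    return random.sample(CHALLENGES, k=n)

if __name__ == "__main__":
    for i, challenge in enumerate(verification_challenges(), start=1):
        print(f"{i}. {challenge}")
```

A glitch during occlusion or a sharp turn is a warning sign rather than proof, just as passing every challenge is not proof of honesty; combine this with the cross-channel verification above.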
Support systemic change
The AI face model economy exists because current laws and platform policies haven’t caught up with the technology. Meaningful change requires:
- Criminalizing deepfake-for-fraud services. The tools enabling real-time face swapping for scam operations should face the same legal framework as other fraud instruments.
- Platform accountability. Telegram and other platforms hosting recruitment channels for scam operations should face consequences for inaction when provided with evidence.
- International cooperation. The INTERPOL Synergia operations demonstrate that cross-border enforcement is possible — but it needs to expand to target the deepfake supply chain, not just the scam infrastructure itself.
- Biometric data rights. People should have enforceable rights over how their facial data is used, including in AI training and face-swapping applications.
The Human Cost
It’s easy to get lost in the technology and forget that this story is fundamentally about people.
The victims who lose their life savings to pig-butchering scams enabled by deepfake video calls. The trafficking victims held in compounds, forced to run scams at gunpoint. And, increasingly, the AI models themselves — young people, mostly women, who may or may not fully understand what they’re signing up for, traveling to countries where their passports will be confiscated and their faces will be rented out to deceive strangers.
When one applicant listed her experience as “1 year as an AI model” and another described perfecting techniques to “make clients trust us,” they were describing a form of labor that exists nowhere in any legitimate economy. It’s a role created entirely by the convergence of advanced AI, organized crime, and the failure of platforms and governments to act.
Your face is no longer just your face. In 2026, it’s a commodity — one that scam operations are actively recruiting for, trafficking people to obtain, and deploying at industrial scale. The question isn’t whether this affects you. It’s whether you’ll recognize it when it does.
This article draws on reporting by WIRED and on research from ChongLuaDao, Humanity Research Consultancy, and the EOS Collective.