The AI assistant that helps millions write better emails, debug code, and brainstorm ideas has a new user: the United States military. And it's helping select bombing targets.
In a development that should concern anyone who uses AI tools, Anthropic's Claude, the same model available to consumers worldwide, played a central role in the US military's strikes against Iran that began February 28, 2026. According to the Wall Street Journal and Washington Post, Claude helped the military strike over 1,000 targets in the first 24 hours of operations.
This isn't a hypothetical scenario about future AI risks. This is happening now, with technology you might be using today.
What Actually Happened
On February 28, 2026, just hours after President Trump ordered all federal agencies to stop using Anthropic's products (calling the company "Radical Left AI"), the US military launched its campaign against Iran. And despite the ban, Claude was central to operations.
Here's what we know:
- Target Selection: Claude, integrated into Palantir's Maven Smart System, proposed "hundreds" of targets for military strikes
- Prioritization: The AI ranked targets by importance and provided location coordinates
- Speed: The technology "shortened the kill chain," the process from target identification to strike authorization
- Scale: Over 1,000 targets were struck in the first 24 hours
Defense Secretary Pete Hegseth himself acknowledged the dependency, stating that Anthropic would continue providing services "for a period of no more than six months to allow for a seamless transition."
In other words: the military is so dependent on Claude that it can't stop using it even when ordered to.
The Corporate-Military Collision
This story exposes a fundamental tension in AI development.
Anthropic's Position
Anthropic has objected to military use of Claude. Their terms of service explicitly prohibit:
- Use for violent ends
- Weapons development
- Surveillance applications
When the military used Claude in January's raid to capture Venezuelan President Nicolás Maduro, Anthropic pushed back. They didn't want their "helpful AI assistant" becoming a targeting system.
The Militaryâs Response
Defense Secretary Hegseth didn't mince words, accusing Anthropic of "arrogance and betrayal" and declaring that "America's warfighters will never be held hostage by the ideological whims of Big Tech."
The Pentagon has now designated Anthropic as a "supply chain risk," even as it continues using Claude in active combat operations.
The Replacement Race
OpenAI's Sam Altman quickly stepped into the breach, reaching an agreement with the Pentagon for use of ChatGPT on classified networks. The message to AI companies is clear: cooperate with military use, or be replaced by competitors who will.
Why Privacy Advocates Should Be Alarmed
This isn't just a story about military AI. It has profound implications for anyone who uses these tools.
Your Data, Their Training
When you interact with AI systems, you're contributing to their capabilities. Every conversation, every query, every correction helps refine these models. The same training that makes Claude helpful for writing also makes it effective for military applications.
You didn't consent to training a targeting system. But in some indirect way, you may have contributed to one.
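To see how indirect that contribution can be, consider a minimal sketch of the kind of pipeline that turns logged interactions into fine-tuning data. This is illustrative only; the schema, field names, and filtering rule are hypothetical, not any vendor's actual process.

```python
# Minimal sketch of how logged user interactions could become training data.
# Illustrative only: the schema, field names, and filtering rule are
# hypothetical, not any vendor's actual pipeline.

import json

def to_training_example(log_entry: dict) -> dict | None:
    """Convert one logged conversation turn into a fine-tuning example."""
    # Keep only turns the user implicitly endorsed (e.g., a thumbs-up).
    if log_entry.get("feedback") != "positive":
        return None
    return {
        "prompt": log_entry["user_message"],
        "completion": log_entry["assistant_message"],
    }

logs = [
    {"user_message": "Summarize this report...",
     "assistant_message": "The report argues...",
     "feedback": "positive"},
    {"user_message": "Fix this bug...",
     "assistant_message": "Try adding a null check...",
     "feedback": "negative"},
]

dataset = [ex for ex in map(to_training_example, logs) if ex is not None]
print(json.dumps(dataset, indent=2))
# The resulting examples refine the model's general capabilities: the same
# capabilities that serve every downstream use of the model.
```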
The Dual-Use Problem
"Dual-use" technology has always been challenging: the same rocket that launches satellites can carry warheads. But AI creates a new dimension of dual-use:
- The same language understanding that summarizes documents can analyze intelligence reports
- The same reasoning that helps with coding can optimize logistics for military operations
- The same pattern recognition that improves search can identify targets
There's no technical barrier between "helpful consumer AI" and "military AI." They're the same technology applied to different problems.
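To make that concrete, here is a minimal sketch of "the same technology applied to different problems." The Client class and model name are placeholders rather than any real vendor API; note that the only thing distinguishing the two uses is the prompt.

```python
# Illustrative sketch: one model, two applications.
# "Client" and "generic-llm" are hypothetical placeholders, not a real API.

class Client:
    def complete(self, model: str, prompt: str) -> str:
        # A real client would call a hosted model; stubbed for illustration.
        return f"[{model} output for: {prompt[:40]}...]"

llm = Client()

# Consumer use: summarize a document.
summary = llm.complete(
    model="generic-llm",
    prompt="Summarize the attached quarterly report:\n...",
)

# Intelligence use: the identical call with different input.
analysis = llm.complete(
    model="generic-llm",
    prompt="Rank these sites by strategic importance:\n...",
)

# Nothing in the model, the API call, or the weights distinguishes the two.
print(summary)
print(analysis)
```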
Surveillance Implications
Anthropic reportedly objected to Claude being used for domestic surveillance. But once an AI system is deployed in military networks, who controls what it analyzes? The line between foreign intelligence and domestic surveillance has historically been... porous.
If Claude can analyze Iranian targets, can it analyze American communications? The technical capability exists.
The "Speed of Thought" Problem
The Guardian reported that AI-powered targeting operates "quicker than the speed of thought." This is presented as a military advantage, but consider what it means:
- Reduced Human Review: When AI generates targets faster than humans can evaluate them, reviews become "essentially perfunctory," according to Oxford researcher Brianna Rosen
- Accountability Gaps: Who is responsible when an AI-selected target turns out to be a school or hospital?
- Precedent Setting: Once we accept AI-speed targeting, there's no going back to human-speed deliberation
Israel's "Lavender" AI targeting system reportedly had a 10% false positive rate in Gaza, and IDF forces largely ignored that error rate. Are we comfortable with similar error rates in Iran?
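A back-of-envelope calculation, using only the figures reported above plus one assumed input (ten minutes of human review per target), shows why review at this tempo becomes perfunctory:

```python
# Back-of-envelope arithmetic using figures reported in this piece.
# The ten-minute review time is an assumption; everything else is reported.

targets_struck = 1_000        # reported strikes in the first 24 hours
hours = 24
false_positive_rate = 0.10    # rate reported for Israel's "Lavender" system

seconds_per_target = hours * 3600 / targets_struck
print(f"One new target every {seconds_per_target:.0f} seconds on average")
# -> One new target every 86 seconds on average

review_minutes = 10           # assumed: a deliberately modest review per target
reviewer_hours_needed = targets_struck * review_minutes / 60
print(f"{reviewer_hours_needed:.0f} reviewer-hours needed in one day")
# -> 167 reviewer-hours, i.e., about 7 reviewers working around the clock

expected_errors = targets_struck * false_positive_rate
print(f"~{expected_errors:.0f} misidentified targets at a 10% error rate")
# -> ~100 misidentified targets
```

Eighty-six seconds per target leaves no time for deliberation, and a Lavender-like error rate would put roughly a hundred wrongly identified sites on the strike list in a single day.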
What This Means for the Future
For AI Companies
The Anthropic-Pentagon dispute clarifies the choice facing AI developers:
Option A: Restrict military use and face being designated a "supply chain risk," potentially losing government contracts and facing political pressure.
Option B: Cooperate with military applications and become complicit in their consequences.
OpenAI chose Option B. Anthropic tried Option A and is being forced toward B anyway. There may not be a viable third option for companies operating in the US market.
For Users
If you use Claude, ChatGPT, or similar tools, understand that:
- Your usage contributes to capabilities that may be applied militarily
- Corporate ethics policies are limited in their ability to prevent military use
- "Safe" AI doesn't exist when the same technology serves both writing assistance and target selection
For Society
We've crossed a Rubicon. Consumer AI is now combat AI. The same models that power productivity tools are shortening kill chains. And the infrastructure to control or even track this dual use doesn't exist.
The privacy implications extend beyond individual data protection to fundamental questions: What are we building? Who controls it? And do we have any say in how it's used?
The Uncomfortable Questions
This situation raises questions that don't have easy answers:
- Should AI companies be able to restrict military use of their products? Or does national security override corporate ethics?
- Do users have a right to know if technology they help train is being used for military purposes?
- Is there a meaningful distinction between "AI that helps with writing" and "AI that helps select targets"? They're the same underlying technology.
- What accountability exists when AI-selected targets turn out to be wrong?
- Can we build AI that's genuinely "safe" if the same capabilities that make it helpful make it militarily useful?
What You Can Do
Stay Informed
This story is evolving rapidly. Pay attention to how AI companies respond to military pressure; their choices affect technology you use daily.
Understand the Technology
Don't treat AI as magic. Understand that these are systems trained on data (possibly including your data), capable of being applied to purposes you never anticipated.
Demand Transparency
Ask AI companies:
- What government contracts do they have?
- What do their usage restrictions actually mean in practice?
- How do they prevent military applications of consumer-trained models?
Support AI Governance
Advocate for:
- Transparency requirements for AI military applications
- Meaningful human oversight of AI targeting systems
- International norms on AI in warfare
The Bottom Line
The helpful AI assistant that millions use daily helped kill people in Iran this week. That's not speculation or future risk; it's current reality.
This doesn't mean AI is inherently evil or that we should stop using these tools. But it does mean we need to grapple with uncomfortable truths about technology we've invited into our lives.
When you ask Claude for help with your next project, remember: somewhere, someone is asking it for help with a very different kind of project. And it's the same Claude.
That should change how we think about AI, privacy, and the corporations that control these increasingly powerful systems.
Exploring the intersection of privacy, AI, and power. Follow My Privacy Blog for analysis that goes beyond the headlines.