The AI assistant that helps millions write better emails, debug code, and brainstorm ideas has a new user: the United States military. And it’s helping select bombing targets.

In a development that should concern anyone who uses AI tools, Anthropic’s Claude—the same model available to consumers worldwide—played a central role in the US military’s strikes against Iran that began on February 28, 2026. According to the Wall Street Journal and the Washington Post, Claude helped the military strike over 1,000 targets in the first 24 hours of operations.

This isn’t a hypothetical scenario about future AI risks. This is happening now, with technology you might be using today.

What Actually Happened

On February 28, 2026, just hours after President Trump ordered all federal agencies to stop using Anthropic’s products—calling the company “Radical Left AI”—the US military launched its campaign against Iran. And despite the ban, Claude was central to operations.

Here’s what we know:

  • Target Selection: Claude, integrated into Palantir’s Maven Smart System, proposed “hundreds” of targets for military strikes
  • Prioritization: The AI ranked targets by importance and provided location coordinates
  • Speed: The technology “shortened the kill chain”—the process from target identification to strike authorization
  • Scale: Over 1,000 targets were struck in the first 24 hours

Defense Secretary Pete Hegseth himself acknowledged the dependency, stating that Anthropic would continue providing services “for a period of no more than six months to allow for a seamless transition.”

In other words: the military is so dependent on Claude that it can’t stop using it even when ordered to.

The Corporate-Military Collision

This story exposes a fundamental tension in AI development.

Anthropic’s Position

Anthropic has objected to military use of Claude. Their terms of service explicitly prohibit:

  • Use for violent ends
  • Weapons development
  • Surveillance applications

When the military used Claude in January’s raid to capture Venezuelan President Nicolás Maduro, Anthropic pushed back. They didn’t want their “helpful AI assistant” becoming a targeting system.

The Military’s Response

Defense Secretary Hegseth didn’t mince words, accusing Anthropic of “arrogance and betrayal” and declaring that “America’s warfighters will never be held hostage by the ideological whims of Big Tech.”

The Pentagon has now designated Anthropic as a “supply chain risk”—even as it continues using Claude in active combat operations.

The Replacement Race

OpenAI’s Sam Altman quickly stepped into the breach, reaching an agreement with the Pentagon for use of ChatGPT on classified networks. The message to AI companies is clear: cooperate with military use, or be replaced by competitors who will.

Why Privacy Advocates Should Be Alarmed

This isn’t just a story about military AI. It has profound implications for anyone who uses these tools.

Your Data, Their Training

When you interact with AI systems, you may be contributing to their capabilities. Depending on a provider’s data policies, your conversations, queries, and corrections can be used to refine these models. The same training that makes Claude helpful for writing also makes it effective for military applications.

You didn’t consent to training a targeting system. But in some indirect way, you may have contributed to one.

The Dual-Use Problem

“Dual-use” technology has always been challenging—the same rocket that launches satellites can carry warheads. But AI creates a new dimension of dual-use:

  • The same language understanding that summarizes documents can analyze intelligence reports
  • The same reasoning that helps with coding can optimize logistics for military operations
  • The same pattern recognition that improves search can identify targets

There’s no technical barrier between “helpful consumer AI” and “military AI.” They’re the same technology applied to different problems.
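To make that concrete, here is a minimal sketch using Anthropic’s public Python SDK. The model name and both prompts are purely illustrative assumptions, not drawn from any reported deployment; the point is that the two requests are indistinguishable at the API level.

    # Minimal sketch: the same API call serves both use cases.
    # Assumptions: ANTHROPIC_API_KEY is set in the environment;
    # the model name and prompts are illustrative placeholders.
    import anthropic

    client = anthropic.Anthropic()

    def ask(prompt: str) -> str:
        # Identical code path for every caller; only the prompt string differs.
        response = client.messages.create(
            model="claude-3-5-sonnet-latest",
            max_tokens=500,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.content[0].text

    # A “helpful consumer AI” request:
    print(ask("Summarize this quarterly report: ..."))

    # A hypothetical intelligence-analysis request is the same code
    # with a different string:
    print(ask("Summarize this field report and rank the sites it mentions: ..."))

In practice, a provider’s usage policies and safety filters sit between these two calls, but those are policy controls, not technical ones: the model, the API, and the code are identical.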

Surveillance Implications

Anthropic reportedly objected to Claude being used for domestic surveillance. But once an AI system is deployed in military networks, who controls what it analyzes? The line between foreign intelligence and domestic surveillance has historically been… porous.

If Claude can analyze Iranian targets, can it analyze American communications? The technical capability exists.

The “Speed of Thought” Problem

The Guardian reported that AI-powered targeting operates “quicker than the speed of thought.” This is presented as a military advantage, but consider what it means:

  • Reduced Human Review: When AI generates targets faster than humans can evaluate them, reviews become “essentially perfunctory,” according to Oxford researcher Brianna Rosen
  • Accountability Gaps: Who is responsible when an AI-selected target turns out to be a school or hospital?
  • Precedent Setting: Once we accept AI-speed targeting, there’s no going back to human-speed deliberation

Israel’s “Lavender” AI targeting system reportedly had a roughly 10% false-positive rate in Gaza, an error rate that IDF operators reportedly largely disregarded, giving individual targets only cursory human review. At a similar rate, the 1,000-plus targets struck in the first 24 hours would include on the order of 100 misidentified ones. Are we comfortable with that in Iran?

What This Means for the Future

For AI Companies

The Anthropic-Pentagon dispute clarifies the choice facing AI developers:

Option A: Restrict military use and face being designated a “supply chain risk,” potentially losing government contracts and facing political pressure.

Option B: Cooperate with military applications and become complicit in their consequences.

OpenAI chose Option B. Anthropic tried Option A and is being forced toward B anyway. There may not be a viable third option for companies operating in the US market.

For Users

If you use Claude, ChatGPT, or similar tools, understand that:

  1. Your usage contributes to capabilities that may be applied militarily
  2. Corporate ethics policies are limited in their ability to prevent military use
  3. “Safe” AI doesn’t exist when the same technology serves both writing assistance and target selection

For Society

We’ve crossed a Rubicon. Consumer AI is now combat AI. The same models that power productivity tools are shortening kill chains. And the infrastructure to control or even track this dual use doesn’t exist.

The privacy implications extend beyond individual data protection to fundamental questions: What are we building? Who controls it? And do we have any say in how it’s used?

The Uncomfortable Questions

This situation raises questions that don’t have easy answers:

  1. Should AI companies be able to restrict military use of their products? Or does national security override corporate ethics?

  2. Do users have a right to know if technology they help train is being used for military purposes?

  3. Is there a meaningful distinction between “AI that helps with writing” and “AI that helps select targets”? They’re the same underlying technology.

  4. What accountability exists when AI-selected targets turn out to be wrong?

  5. Can we build AI that’s genuinely “safe” if the same capabilities that make it helpful make it militarily useful?

What You Can Do

Stay Informed

This story is evolving rapidly. Pay attention to how AI companies respond to military pressure—their choices affect technology you use daily.

Understand the Technology

Don’t treat AI as magic. Understand that these are systems trained on data (possibly including your data), capable of being applied to purposes you never anticipated.

Demand Transparency

Ask AI companies:

  • What government contracts do they hold?
  • What do their usage restrictions actually mean in practice?
  • How do they prevent military applications of consumer-trained models?

Support AI Governance

Advocate for:

  • Transparency requirements for AI military applications
  • Meaningful human oversight of AI targeting systems
  • International norms on AI in warfare

The Bottom Line

The helpful AI assistant that millions use daily helped kill people in Iran this week. That’s not speculation or future risk—it’s current reality.

This doesn’t mean AI is inherently evil or that we should stop using these tools. But it does mean we need to grapple with uncomfortable truths about technology we’ve invited into our lives.

When you ask Claude for help with your next project, remember: somewhere, someone is asking it for help with a very different kind of project. And it’s the same Claude.

That should change how we think about AI, privacy, and the corporations that control these increasingly powerful systems.


Exploring the intersection of privacy, AI, and power. Follow My Privacy Blog for analysis that goes beyond the headlines.