In 2025, Anthropic signed a contract with the Pentagon. The company was willing to let the U.S. military use Claude — its AI model — for a wide range of purposes. What Anthropic wasn’t willing to do was let the government use its technology to conduct mass surveillance of American citizens, or to power fully autonomous weapons systems.
That condition, which Anthropic considered non-negotiable from the start of the contract, ultimately led to one of the most remarkable government-versus-tech confrontations in recent memory.
The Demand and the Refusal
The Department of Defense wanted broad, unfettered access to Anthropic’s models across what it described as “all lawful purposes.” Anthropic pushed back. Its position was clear: Claude could help with logistics, research, analysis, and administrative tasks — but not domestic mass surveillance, and not autonomous weapons.
The two sides couldn’t reach agreement.
In early March 2026, the DOD escalated. It officially designated Anthropic a supply chain risk — a designation typically reserved for foreign adversaries and compromised vendors — and claimed the company threatened national security.
Read that again: an American AI company that refused to let its technology be used against American citizens was labeled a national security threat by the U.S. government.
The Lawsuit and the Injunction
Anthropic filed suit. On March 24, 2026, Judge Rita Lin of the Northern District of California granted a preliminary injunction blocking the DOD’s supply chain risk designation.
Her ruling was pointed. She wrote that the government’s action was “not designed to protect national security, but rather to punish Anthropic” for publicly disagreeing with its contracting position. That, she found, constituted First Amendment retaliation — the government using its power to punish a private company for protected speech.
The judge’s language was blunt: “Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government.”
The Pentagon’s CTO publicly stated after the ruling that the ban “still stands” from the DOD’s perspective — a remarkable statement that suggested the executive branch intended to maintain its position regardless of the court order. The government then appealed, and on April 8, 2026, the appeals court declined to grant a temporary block of the lower court’s injunction.
What the DOD Actually Wanted
The dispute is important to understand precisely because of what the government was seeking, not just that it was seeking it.
Mass surveillance of Americans is not a hypothetical capability. The infrastructure for it exists: the NSA’s collection programs, commercial data brokers who sell location and behavioral data, and now AI models capable of analyzing billions of data points and identifying patterns, connections, and individuals at scale.
The government’s position was, essentially, that it should be able to point this capability at American citizens without meaningful limitation — and that any AI company refusing to participate was acting against the national interest.
Anthropic’s position was that this is precisely the kind of use case that shouldn’t be enabled, regardless of who’s asking.
The Electronic Frontier Foundation called out what it viewed as the deeper problem: privacy protections that depend on the ethical choices of individual corporations aren’t really protections at all. What Anthropic did here was admirable — but other AI companies may make different choices. Google, notably, expanded the Pentagon’s access to its AI systems after Anthropic’s refusal.
The Broader Pattern
The Anthropic-DOD conflict doesn’t exist in isolation. It sits alongside a broader push by the current administration to expand government AI capabilities with fewer restrictions:
- Congress has repeatedly failed to pass meaningful limits on warrantless AI-powered analysis of communications swept up under FISA’s Section 702 program.
- The White House released a National Policy Framework for AI in March 2026 that prioritizes American competitiveness over civil liberties protections.
- State-level AI surveillance bills have proliferated, with dozens of proposals raising concerns from privacy advocates.
The administration’s view appears to be that AI is a national security tool first — and that companies that decline to participate on the government’s terms are obstacles rather than principled actors.
Why This Story Is Underreported
The Anthropic-DOD fight received significant coverage in tech and legal circles when the injunction was granted in late March. By May, it had largely faded from public attention despite being unresolved — the appeals process continues, and the underlying policy dispute about what AI companies can refuse to do for the government has not been settled.
This matters for everyone, not just AI companies. The question of whether the government can compel AI companies to enable domestic surveillance — and what happens when they say no — will define the relationship between AI, privacy, and government power for years.
Anthropic drew a line. A federal judge backed them up. The government is still fighting.
Where this ends will affect what AI can be used to do to all of us.