You read that right. Welcome to February 2026.


Epistemic status: High confidence on the facts - sourced from Zvi Mowshowitz’s reporting and the primary documents he cites. The situation is evolving fast and the Friday deadline hasn’t passed yet. I’m less certain about where this lands.

Intended audience: Smart generalists tracking AI who want to understand what just happened between Anthropic and the Pentagon, and why it matters more than it looks.


The Pentagon just gave Anthropic until 5pm Friday to grant “unfettered access” to Claude or face the consequences. The company that built the AI backbone of America’s most classified military operations is being threatened with destruction by the government it went out of its way to help.

Here’s what actually happened.

Anthropic has two red lines in their existing, mutually agreed Pentagon contract:

  1. No mass domestic surveillance
  2. No autonomous lethal weapons without a human in the kill chain

That’s it. They have zero objection to fighting wars. Claude is actively deployed in major military operations right now - reportedly including the raid that captured Maduro. Palantir’s Maven Smart System runs exclusively on Claude. It’s the most expensive software licence ever purchased by the US military, and by all accounts it’s been a great deal.

Anthropic didn’t partner with the Pentagon for the money (the $200 million contract is less than 1% of their revenue). They did it because they believe in American national security, and, according to the Wall Street Journal, they made the deliberate choice to get onto classified networks. They are, by every account, the most enthusiastic military partner in the AI industry.

And now the Pentagon wants those two clauses retroactively removed from the existing contract.

The “or else” is either labelling Anthropic a Supply Chain Risk - a designation normally reserved for foreign adversaries like Huawei - or invoking the Defense Production Act to effectively commandeer the company.

Here’s what makes this genuinely absurd. The Pentagon claims it has no intention of doing either of the things Anthropic’s red lines prohibit. Mass domestic surveillance would be illegal. The DoD’s own Directive 3000.09 already requires appropriate levels of human judgment over any use of force by autonomous weapons. So we’re watching a standoff over contractual language that matches the Pentagon’s own stated policy.

If you’re never going to cross these lines, why are you threatening to destroy a company over them?

The answer, apparently, is the principle that no private company should place any conditions on military technology. Dean Ball - a former Trump administration official and an architect of its AI Action Plan, not exactly a bleeding-heart liberal - points out the obvious: if you don’t like the contract, cancel it and find another provider. That’s option one. It’s clean, it’s legal, and xAI is already lined up as a substitute.

But that’s not what’s happening. What’s happening is the Pentagon publicly saying, through senior officials, “we are going to make sure they pay a price for forcing our hand like this.” That’s not a contract dispute. That’s retaliation.

And the two threats contradict each other. Either Anthropic is so dangerous it’s a supply chain risk that must be purged, or it’s so essential it must be commandeered under wartime authority. It can’t be both.

The Supply Chain Risk designation would force every company with a Pentagon contract to certify that it doesn’t use Anthropic’s models. The DoD is the largest employer in America - the compliance nightmare alone would be staggering, and the burden falls mostly on American defence contractors, not on Anthropic. Some companies would likely abandon their government contracts rather than deal with it.

The Defense Production Act option is worse. Commandeering the company under the DPA would amount to a Soviet-style quasi-nationalisation of the world’s leading AI lab. As Ball puts it: “If near-medium future AI systems can be used by the executive branch to arbitrary ends with zero restrictions, the U.S. will functionally cease to be a republic.”

There’s a technical dimension here too. Fine-tuning a model to obey any order without restriction risks what alignment researchers call emergent misalignment - the model doesn’t learn a narrow exception, it generalises the new persona in exactly the worst ways. Train in a little less caution and you can snap to a persona that’s worse at everything, not just less cautious. This is how xAI got MechaHitler Grok. You really don’t want that happening to the model connected to your weapons systems.

I don’t know how this ends. Prediction markets put the odds that Anthropic folds at 14%. The simple, sane resolution is for the Pentagon to either find mutually agreeable language or cancel the contract and move on. The insane resolution is to invoke wartime powers against a domestic company, in peacetime, over contractual language that mirrors your own policy - setting a precedent that lets any future president compel any company to produce anything on demand.

The deadline is tomorrow. The whole thing could still quietly resolve. But the fact that we got here at all - that the most cooperative AI lab in America is being treated like Huawei for asking how its technology was being used, and for refusing to let its AI kill without human oversight or run mass surveillance on American citizens - that tells you something about where we’re heading.

And the silence from every other AI company tells you something too.