Did the US use Claude in Iran strikes hours after Trump’s ban?

Hours after Trump ordered agencies to stop using Claude, the Wall Street Journal reports US forces still used it in Iran operations, exposing how deeply embedded AI already is.

DQI Bureau

The ban story was always bigger than one company. It was about control: who sets the rules when a frontier model is already inside government systems.


Now, Wall Street Journal reporting adds a blunt operational layer to that argument. It says US Central Command used Anthropic’s Claude during strikes on Iran, hours after Trump directed federal agencies to stop using Anthropic’s tools. The report also says Claude was used in a separate operation involving Venezuela’s Nicolas Maduro.

WHAT THE WSJ DETAIL REALLY SIGNALS

This is not simply “ban ignored”. It is what happens when policy meets integration. Once AI is embedded in workflows, contracts, and platforms, an executive order reads like a switch but functions like a migration.

It may be recalled that the Pentagon demanded broader, unrestricted use, Anthropic refused, and Trump responded sharply, including a phase-out window for defence because the tech is already in use. The WSJ angle strengthens that core point: the state can signal a split quickly, but unwinding dependence takes time, planning, and substitutes that are not always ready.


The deeper issue is vendor leverage versus state authority. AI firms want enforceable guardrails for high-risk use. Defence wants operational freedom and continuity. When those positions clash, the pressure shifts to procurement actions, supply-chain labels, and courtroom battles.
