The ban story was always bigger than one company. It was about control: who sets the rules when a frontier model is already inside government systems.
Now, the Wall Street Journal reporting adds a blunt operational layer to that argument. It says US Central Command used Anthropic’s Claude during strikes on Iran, hours after Trump directed federal agencies to stop using Anthropic’s tools. The report also says Claude was used in a separate operation involving Venezuela’s Nicolas Maduro.
WHAT THE WSJ DETAIL REALLY SIGNALS
This is not simply “ban ignored”. It is what happens when policy meets integration. Once AI is embedded in workflows, contracts, and platforms, an executive order reads like a switch, but functions like a migration.
It may be recalled that the Pentagon demanded broader, unrestricted use, Anthropic refused, and Trump responded sharply, including a phase-out window for defence precisely because the tech is already in use. The WSJ angle strengthens that core point: the state can signal a split quickly, but unwinding dependence takes time, planning, and substitutes that are not always ready.
The deeper issue is vendor leverage versus state authority. AI firms want enforceable guardrails for high-risk use. Defence wants operational freedom and continuity. When those positions clash, the pressure shifts to procurement actions, supply-chain labels, and courtroom battles.