The US military is still using Claude — but defense-tech clients are fleeing
Summary
Anthropic finds itself in an awkward position following a dispute with the Department of Defense (DoD): its AI models are actively being used for targeting decisions in the ongoing conflict between the U.S. and Iran, even as many defense industry clients abandon the technology. President Trump directed civilian agencies to stop using Anthropic products and gave the DoD six months to wind down operations, but a surprise attack on Tehran occurred before the directive could be fully executed. Secretary of Defense Pete Hegseth has threatened to designate Anthropic a supply-chain risk, but no official action has been taken, so there are currently no legal barriers to its military use. Reports indicate that Anthropic's systems, working with Palantir's Maven system, suggested hundreds of targets and prioritized them in real time for Pentagon officials planning strikes. Meanwhile, major defense contractors such as Lockheed Martin, along with numerous subcontractors, are actively replacing Anthropic's models with competitors' solutions. The result is a leading AI lab being pushed out of the defense industry even as its tools are deployed in an active war zone.
(Source: TechCrunch)