Pentagon threatens to make Anthropic a pariah if it refuses to drop AI guardrails

CNN
The Pentagon is pressuring Anthropic to remove safety restrictions on its AI model Claude, threatening contract termination and blacklisting if it refuses.

Summary

The Pentagon is demanding that Anthropic, an AI company with a $200 million contract, lift safety restrictions on its AI model, Claude, to allow “all lawful use” by the military. Anthropic is resisting over concerns about AI-controlled weapons and mass surveillance, arguing that the technology is unreliable and insufficiently regulated. Defense Secretary Pete Hegseth has threatened to terminate the contract by Friday, to potentially invoke the Defense Production Act to compel Anthropic’s cooperation, and to label the company a supply chain risk, which would hinder its work with other military contractors. Legal experts question the legality of simultaneously designating Anthropic a supply chain risk and forcing it to work with the Pentagon. Despite a cordial meeting, Anthropic remains firm on its red lines, while the Pentagon insists it is operating within legal boundaries. The dispute could benefit Anthropic’s competitors, such as xAI, that are willing to operate within the Pentagon’s parameters.

(Source: CNN)