Inside Anthropic’s existential negotiations with the Pentagon

The Verge
Anthropic is in a tense negotiation with the Pentagon over its acceptable use policy, specifically concerning lethal autonomous weapons and mass surveillance.

Summary

Anthropic is engaged in a high-stakes conflict with the Department of Defense (DoD) over its acceptable use policy, which prohibits autonomous kinetic operations and mass domestic surveillance. The Pentagon, led by CTO Emil Michael, is reportedly threatening to label Anthropic a "supply chain risk" if it does not accept new terms, including an "any lawful use" provision that would grant the military broad rights over its AI model, Claude. The threat is unusually public and could severely damage Anthropic's business, since major defense contractors rely on Claude, the only frontier model cleared for classified Pentagon networks.

The clash stems from Anthropic's commitment to responsible AI principles, which align with existing DoD directives against fully autonomous weapons and domestic surveillance; a recent memo from Defense Secretary Hegseth prioritizing speed over safety appears to contradict those directives. While OpenAI and xAI have reportedly agreed to the new terms, Anthropic's unique security clearance gives it leverage, though it cannot formally coordinate with other labs.

(Source: The Verge)