AI vs. the Pentagon: killer robots, mass surveillance, and red lines

The Verge
Anthropic is resisting Pentagon demands to remove safety guardrails from its AI models for military use, including lethal autonomous weapons.

Summary

Anthropic is locked in a major dispute with the Pentagon after refusing new contract terms that would require the company to remove safety guardrails from its AI models, giving the military unrestricted use, including for mass surveillance of Americans and fully autonomous lethal weapons ("killer robots"). Pentagon CTO Emil Michael is reportedly threatening to label Anthropic a "supply chain risk" if it does not comply, a designation typically reserved for national security threats. While rivals OpenAI and xAI have reportedly agreed to these terms, Anthropic CEO Dario Amodei remains firm on the company's red lines, even after meeting with Defense Secretary Pete Hegseth, stating the company cannot "in good conscience accede to their request." The conflict highlights broader industry concerns about the ethical implications of building AI tools used for surveillance and warfare.

(Source: The Verge)