US military used Anthropic’s AI model Claude in Venezuela raid, report says

the Guardian
The US military reportedly used Anthropic's AI model Claude during an operation in Venezuela, raising concerns over policy compliance.

Summary

The Wall Street Journal reported that the US military used Anthropic's AI model Claude during an operation in Venezuela aimed at kidnapping Nicolás Maduro, a high-profile instance of the Department of Defense employing artificial intelligence. Venezuela's defense ministry says the raid involved bombings in Caracas and left 83 people dead. Anthropic's terms of use explicitly prohibit using Claude for violent ends, weapons development, or surveillance. Anthropic declined to confirm or deny the reported usage, but the company stressed that any deployment must comply with its policies. Sources suggest Claude was deployed through Anthropic's partnership with Palantir Technologies.

The report comes as militaries worldwide step up AI integration, prompting critics to warn of targeting errors and autonomous weapons. AI developers such as Anthropic's CEO, Dario Amodei, have advocated for regulation, in contrast with the US defense department's stated need for AI models that support warfare.
