How OpenAI caved to the Pentagon on AI surveillance

The Verge
OpenAI secured a Pentagon deal by agreeing to 'any lawful use,' which critics say allows for mass surveillance despite claims of maintaining safety red lines.

Summary

OpenAI CEO Sam Altman announced a new agreement with the Pentagon that he claimed upheld safety principles against mass surveillance and autonomous lethal weapons, contrasting it with Anthropic's refusal to compromise on these "red lines." However, sources indicate that OpenAI's deal is significantly softer. It hinges on the phrase "any lawful use," which effectively permits the military to use OpenAI's technology for any activity deemed technically legal, including past and potential mass surveillance programs that rely on stretched legal definitions.

Critics, including former OpenAI policy research head Miles Brundage, suggest OpenAI "caved" and framed it as a victory, potentially harming Anthropic, which was subsequently labeled a supply-chain risk by the Pentagon. OpenAI spokesperson Kate Waters denied the agreement allows for bulk, open-ended surveillance of Americans, citing compliance with existing laws like the Fourth Amendment and FISA. However, legal experts note that these same authorities have been used to justify extensive surveillance operations revealed by Edward Snowden.

The agreement's language on lethal autonomous weapons is similarly weak: it requires human control only where law or policy already mandates it, unlike Anthropic's push for a complete ban until the technology is deemed reliable. While OpenAI claims technical safeguards such as classifiers and cloud-only deployment will enforce its red lines, sources argue these measures are insufficient to monitor compliance or to prevent the technology from powering the "autonomous kill chain" leading up to a strike, especially since the agreement defaults to allowing any use the government deems legal.
