OpenAI shares more details about its agreement with the Pentagon

TechCrunch
Despite criticism, OpenAI detailed the multi-layered safeguards against misuse in its deal to deploy AI models for the Pentagon.

Summary

Following criticism and the collapse of Anthropic's deal with the Pentagon, OpenAI announced its own agreement to deploy models in classified environments; CEO Sam Altman admitted the deal was "rushed." OpenAI published a blog post outlining three areas where its models cannot be used: mass domestic surveillance, autonomous weapon systems, and high-stakes automated decisions. The company describes its safeguards as multi-layered: it retains discretion over its safety stack, deploys via the cloud, keeps cleared personnel in the loop, and relies on strong contractual protections. It contrasts this with other companies that rely primarily on usage policies. However, critics argue the deal still allows for domestic surveillance because it defers to compliance with Executive Order 12333. OpenAI's head of national security partnerships, Katrina Mulligan, countered that deployment architecture, such as limiting integration via a cloud API, matters more than contract language alone for preventing use in weapons systems.

(Source: TechCrunch)