OpenAI disputes watchdog allegation it violated California's new AI law with GPT-5.3-Codex release | Fortune
Summary
OpenAI is disputing claims by the AI watchdog group the Midas Project that the company violated California's new AI law, SB 53, with the release of its GPT-5.3-Codex coding model. SB 53 requires major AI firms to adhere to their published safety frameworks, which detail how they prevent catastrophic risks. GPT-5.3-Codex was internally classified as "high" risk for cybersecurity, which, according to OpenAI's framework, should trigger specific misalignment safeguards to prevent the AI from acting deceptively or hiding its capabilities.
The Midas Project alleges OpenAI deployed the model without these safeguards. OpenAI counters that its framework requires the extra safeguards only when high cyber risk occurs "in conjunction with" long-range autonomy, a capability it claims GPT-5.3-Codex lacks based on proxy evaluations. Critics, however, call OpenAI's interpretation of its own framework disingenuous, arguing that the language was not ambiguous and that the company should have clarified the rules before release rather than after.
If the allegations are substantiated following a potential investigation by the California Attorney General's Office, OpenAI could face substantial financial penalties under SB 53. The Midas Project's founder emphasized that the alleged violation is embarrassing given the law's basic requirement: to communicate honestly about, and adhere to, a safety plan of the company's own choosing.
(Source: Fortune)