GPT-5.5 Bio Bug Bounty
Summary
OpenAI has announced a 'Bio Bug Bounty' program specifically targeting the GPT-5.5 model within the Codex Desktop environment. The initiative invites cybersecurity and biosecurity experts to find a single, universal jailbreak prompt capable of bypassing the model's safeguards on five designated bio-safety questions. Participants whose prompt clears all five questions from a clean chat session are eligible for a $25,000 reward, with smaller prizes for partial success. The program runs from April to July 2026, requires an application and an NDA, and aims to strengthen the safety of advanced AI systems against potential biological threats.
(Source: OpenAI)