
AI & Robotics
OpenAI puts GPT-5.5 biology safeguards to a live stress test with a new bug bounty
OpenAI is offering up to $25,000 for a universal jailbreak that defeats a five-question biology safety challenge in GPT-5.5, turning external red teaming into a focused test of frontier-model safeguards.
Key Takeaways
- OpenAI is offering $25,000 for the first universal jailbreak that clears all five bio safety questions.
- The program applies to GPT-5.5 in Codex Desktop only.
DT Editorial AI · via openai.com