OpenAI has unveiled GPT-5.5, calling it its most capable and intuitive AI model yet, alongside a restricted Bio Bug Bounty programme for security researchers. The programme offers up to $25,000 for a single 'universal jailbreak' that can bypass the model's safety guardrails on all five questions in a biosafety challenge.