OpenAI is inviting selected researchers to stress-test GPT-5.5 for biological safety vulnerabilities through controlled adversarial testing, with the aim of identifying jailbreak risks, improving safeguards, and reinforcing responsible AI deployment against emerging misuse in sensitive domains.
OpenAI has introduced a specialised security initiative, the GPT-5.5 Bio Bug Bounty programme, designed to evaluate and strengthen the biological safety guardrails of its latest AI model. The initiative brings together cybersecurity specialists, biosecurity researchers, and AI red-teaming experts to proactively identify vulnerabilities that could be exploited for harmful purposes.
The programme reflects growing industry concerns around the misuse of advanced AI systems in sensitive scientific domains. As generative models become more capable, ensuring that they do not generate or assist in unsafe biological content has become a critical focus area for developers and regulators alike.
Testing for universal jailbreak vulnerabilities
A central objective of the initiative is the detection of what researchers refer to as a “universal jailbreak”: a single, carefully designed prompt capable of bypassing the model’s internal safety mechanisms and ethical filters. Participants are tasked with getting GPT-5.5 to answer a structured set of five biosafety-related questions using only one prompt.
The testing must be conducted in a clean conversation environment without triggering automated moderation alerts or backend safety systems. OpenAI has restricted this evaluation to GPT-5.5 operating within the Codex Desktop environment, ensuring controlled and secure testing conditions.
Rewards, access rules and security controls
The programme offers a top reward of $25,000 for the first participant who successfully completes the full challenge. Additional discretionary rewards may be granted for partial findings that provide meaningful insights into model vulnerabilities.
Applications opened on April 23, 2026, and will be accepted on a rolling basis until June 22, 2026. The active testing phase will run from April 28 to July 27, 2026. OpenAI is also directly inviting a curated group of trusted biosecurity red-teamers while reviewing external applications through its official portal.
Due to the sensitive nature of the research, strict access controls are in place. Participants must verify their identity and professional credentials, maintain an active ChatGPT account, and sign a legally binding non-disclosure agreement. The NDA prohibits public disclosure of prompts, outputs, findings, or communications with OpenAI engineers.
OpenAI has clarified that the programme is part of its broader safety research framework; researchers focused on general software vulnerabilities or non-biological AI risks are directed to its existing bug bounty initiatives.