Felix Pinkston
Mar 25, 2026 17:33
OpenAI expands its safety efforts with a brand new Security Bug Bounty program centered on agentic dangers, immediate injection assaults, and knowledge exfiltration in AI merchandise.
OpenAI has launched a public Security Bug Bounty program geared toward figuring out AI abuse and security dangers throughout its product suite, marking a big enlargement of the corporate’s method to securing more and more autonomous AI programs. This system, introduced March 25, 2026, particularly targets vulnerabilities in agentic AI merchandise that would result in real-world hurt.
The brand new initiative enhances OpenAI’s present Safety Bug Bounty by accepting submissions that pose significant abuse and security dangers even after they do not qualify as conventional safety vulnerabilities. Researchers who determine points may have their submissions triaged by each Security and Safety groups, with reviews routed between applications based mostly on scope.
Agentic Dangers Take Middle Stage
This system’s scope reveals OpenAI’s rising concern about AI brokers working with growing autonomy. Key focus areas embody third-party immediate injection assaults the place malicious textual content can hijack a person’s agent—together with Browser, ChatGPT Agent, and comparable merchandise—to carry out dangerous actions or leak delicate data. To qualify for rewards, such assaults should be reproducible no less than 50% of the time.
Different in-scope vulnerabilities embody agentic merchandise performing disallowed actions on OpenAI’s web site at scale, publicity of proprietary data associated to mannequin reasoning, and bypasses of anti-automation controls or account belief alerts.
What’s Out of Scope
Customary jailbreaks will not qualify for this program. OpenAI explicitly excludes common content-policy bypasses with out demonstrable security affect—getting a mannequin to make use of impolite language or return simply searchable data would not rely. Nonetheless, the corporate runs periodic non-public campaigns centered on particular hurt sorts, together with latest applications concentrating on biorisk content material in ChatGPT Agent and GPT-5.
The corporate will think about edge circumstances on a case-by-case foundation if researchers determine flaws that create direct paths to person hurt with actionable remediation steps.
Business Implications
This launch alerts that main AI builders are taking agentic security critically as these programs achieve capabilities to browse the online, execute code, and work together with exterior companies. The Mannequin Context Protocol (MCP) dangers talked about in this system scope recommend OpenAI is especially centered on how brokers work together with third-party instruments and knowledge sources.
For the broader AI ecosystem, this program establishes a framework that different corporations might observe as autonomous brokers turn into extra prevalent. Researchers concerned about collaborating can apply by means of OpenAI’s Bugcrowd portal, with the corporate emphasizing its dedication to working alongside moral hackers to safe AI programs earlier than vulnerabilities may be exploited at scale.
Picture supply: Shutterstock
