OpenAI is prioritizing security with a significant bug bounty program boost and new AI security research grants. Learn how the company is collaborating with researchers and experts to protect its AI platforms from emerging threats.
OpenAI is strengthening its security infrastructure with a forward-looking approach to AI, expanding security initiatives across grant programs, bug bounties, and internal defenses.
In its latest blog post, OpenAI unveiled a set of new cybersecurity initiatives, signaling a bold push toward artificial general intelligence (AGI). A key element of this strategic move is a substantial increase in the maximum reward offered by its bug bounty program, which now reaches $100,000 for critical findings.
As previously reported by HackRead.com, OpenAI launched its bug bounty program in April 2023 in partnership with Bugcrowd. The program initially focused on finding flaws in the ChatGPT AI chatbot to improve its security and reliability, with rewards starting at $200 for low-severity findings and reaching $20,000 for exceptional discoveries.
Now, OpenAI has confirmed that the program is getting a major overhaul: the maximum payout has been raised from $20,000 to $100,000, and the program's scope has been broadened considerably, a move OpenAI says reflects its commitment to ensuring users' trust in its systems.
"This increase reflects our commitment to rewarding meaningful, high-impact security research that helps us protect users and maintain trust in our systems," the company noted in the announcement.
To further incentivize participation, OpenAI is introducing limited-time bonus promotions, the first focusing on IDOR (Insecure Direct Object Reference) access control vulnerabilities. The promotion, running from March 26 to April 30, 2025, also raises the baseline bounty range for these types of vulnerabilities.
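For readers unfamiliar with the bug class, an IDOR flaw occurs when an application returns an object based solely on a caller-supplied identifier without checking whether the caller is authorized to access it. The sketch below is purely illustrative (the record store, function names, and data are invented, not OpenAI's actual API) and contrasts a vulnerable lookup with a fixed one that enforces ownership:

```python
# Hypothetical in-memory store of per-user records (illustrative data only).
RECORDS = {
    1: {"owner": "alice", "data": "alice's chat history"},
    2: {"owner": "bob", "data": "bob's chat history"},
}

def get_record_insecure(record_id: int) -> str:
    # Vulnerable (IDOR): returns whatever record the caller asks for,
    # with no check on who is asking. Bob can read Alice's data by
    # simply guessing record_id=1.
    return RECORDS[record_id]["data"]

def get_record_secure(record_id: int, requesting_user: str) -> str:
    # Fixed: verify the requester actually owns the referenced object
    # before returning it.
    record = RECORDS.get(record_id)
    if record is None or record["owner"] != requesting_user:
        raise PermissionError("not authorized for this record")
    return record["data"]
```

The fix is an access-control check at the point of object retrieval, which is exactly the class of missing check bounty hunters probe for during a promotion like this.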
The company also plans to expand its Cybersecurity Grant Program, which has already funded 28 research projects focused on both offensive and defensive security techniques. These projects have explored areas such as autonomous cybersecurity defenses, secure code generation, and prompt injection. The grant program is now seeking proposals in five new research areas: software patching, model privacy, detection and response, security integration, and agentic AI security.
OpenAI is also introducing microgrants in the form of API credits to enable rapid prototyping of innovative cybersecurity ideas. Additionally, it plans to engage in open-source security research, collaborating with experts from academic, government, and industrial labs to identify vulnerabilities in open-source software code.
This effort is aimed at improving the ability of OpenAI's AI models to find and patch security flaws. The company plans to issue security disclosures to the relevant open-source parties as vulnerabilities are discovered.
In addition, OpenAI is integrating its own AI models into its security infrastructure to improve real-time threat detection and response. To strengthen its defenses, the company has established a new red team partnership with SpecterOps, a cybersecurity firm. The collaboration will involve rigorous simulated attacks across OpenAI's infrastructure, including corporate, cloud, and production environments.
As OpenAI's user base expands, now serving over 400 million weekly active users, the company acknowledges its growing responsibility to safeguard user data and systems. While it focuses on developing advanced AI agents, the company is also addressing the unique security challenges these technologies bring. That includes defending against prompt injection attacks, implementing advanced access controls, comprehensive security monitoring, and cryptographic protections, reinforcing its commitment to building secure and trustworthy AI.
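To see why prompt injection is singled out for AI agents, consider how untrusted text (a web page or document an agent reads) can try to override the agent's instructions. The minimal sketch below is an assumption-laden illustration, not OpenAI's actual defense: it keeps untrusted content in a separate message role rather than concatenating it into the system prompt, and flags known override phrases (the phrase list and message format are invented for this example):

```python
# Illustrative override phrases an injected document might contain.
SUSPICIOUS_PHRASES = [
    "ignore previous instructions",
    "disregard the system prompt",
    "you are now",
]

def flag_injection(untrusted_text: str) -> bool:
    """Return True if the untrusted text contains a known override phrase."""
    lowered = untrusted_text.lower()
    return any(phrase in lowered for phrase in SUSPICIOUS_PHRASES)

def build_messages(system_prompt: str, untrusted_document: str) -> list:
    # Untrusted content stays in its own user-role message, never spliced
    # into the system prompt, so it can be treated as data, not instructions.
    if flag_injection(untrusted_document):
        raise ValueError("possible prompt injection detected")
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": untrusted_document},
    ]
```

Real defenses are far more layered (keyword matching is trivially bypassed), but the structural idea of separating instructions from untrusted data is the core of the mitigation this class of attack demands.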