You can now earn money from ChatGPT developer OpenAI by finding bugs in the popular chatbot program. The company today announced a bug bounty program that offers cash rewards in exchange for reporting security vulnerabilities in OpenAI’s systems.
“Our rewards range from $200 for low-severity findings to up to $20,000 for exceptional discoveries,” OpenAI says. The program is being run through Bugcrowd, a bug bounty platform.
However, OpenAI won’t accept jailbreaks for ChatGPT or text prompts intended to trick the AI program into violating its own rules. Since ChatGPT first emerged, users have found ways to jailbreak the chatbot to post swear words, write about banned political topics, or even create malware.
The bug bounty program also won’t accept reports of ChatGPT generating incorrect facts. “Model safety issues do not fit well within a bug bounty program, as they are not individual, discrete bugs that can be directly fixed,” OpenAI says. “Addressing these issues often involves substantial research and a broader approach.” (Users can report model safety issues using a separate form.)
Examples of the issues OpenAI’s bug bounty program won’t accept.
(Credit: Bugcrowd)
Instead, OpenAI’s bug bounty program focuses on flaws pertaining to user privacy and cybersecurity on the company’s web domains and APIs. Last month, OpenAI apologized for a bug that briefly caused ChatGPT to leak payment details and chat histories for some users.
The company’s bug bounty program promises to help OpenAI uncover similar weaknesses in how ChatGPT processes user data, which has begun to include third-party access via the ChatGPT API and plugin store.
The program’s scope also covers bugs involving OpenAI leaking data through third-party vendors, as well as exposed API keys found circulating on the internet.
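To illustrate what hunting for exposed API keys can look like in practice, here is a minimal sketch that scans text for strings resembling OpenAI secret keys. The `sk-` prefix pattern is an assumption based on the format OpenAI keys have historically used, not an official specification, and any match should be treated only as a candidate for manual review.

```python
import re

# Assumed pattern: OpenAI secret keys have historically started with "sk-"
# followed by a long run of alphanumeric characters. Real key formats vary,
# so matches are candidates for review, not confirmed leaks.
KEY_PATTERN = re.compile(r"\bsk-[A-Za-z0-9]{20,}\b")

def find_candidate_keys(text: str) -> list[str]:
    """Return substrings that look like exposed API keys."""
    return KEY_PATTERN.findall(text)

# Hypothetical example of a key accidentally committed in a config file
sample = 'config = {"api_key": "sk-' + "a" * 24 + '"}'
print(find_candidate_keys(sample))  # one candidate key found
```

A real scanner would walk repositories or paste sites rather than a single string, but the core check is the same pattern match.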
However, OpenAI’s bug bounty program comes with a condition that requires participants to “keep vulnerability details confidential until authorized for release by OpenAI’s security team.” The reward amounts are also significantly lower than those of bug bounty programs from other companies, such as Apple, which can pay up to $2 million for the most severe vulnerabilities.