OpenAI, the startup behind the popular ChatGPT AI chatbot, has launched a security bug bounty program offering rewards of up to $20,000 for reported vulnerabilities.
According to The Hacker News, OpenAI has partnered with the crowdsourced security platform Bugcrowd so that researchers can report vulnerabilities they discover in ChatGPT. Rewards range from $200 for low-severity findings up to $20,000 for exceptional discoveries.
The program does not cover model safety issues or problems with generated content, such as malicious code generation or misleading output. The company says addressing these issues requires substantial research and a broader approach.
Other out-of-scope issues include denial-of-service (DoS) attacks, brute-force attacks against OpenAI APIs, and demonstrations involving data destruction or unauthorized access to sensitive information.
In-scope targets include the OpenAI API, ChatGPT (including its plugins), third-party integrations, exposed API keys, and any domains operated by the startup.
With more and more people using ChatGPT, OpenAI wants to catch potential issues quickly to keep its systems running smoothly and prevent weaknesses from being exploited. By engaging the tech community, the company hopes to resolve problems before they become more serious.
The move is said to follow OpenAI's patching of account takeover and data exposure flaws in ChatGPT, an incident that prompted Italy's data protection authority to ban the chatbot while it conducts a thorough investigation.