Even a failed hacking attempt raises security concerns about what could happen if ChatGPT or similar AI tools fall into the wrong hands.
According to cybersecurity firm Check Point Software Technologies (CPST), OpenAI’s ChatGPT artificial intelligence program is the target of numerous attacks.
“We have noticed that there are attacks from Russian hackers to circumvent geographical barriers that are being applied on ChatGPT,” said Pete Nicoletti, chief information security officer at CPST. The “barrier” Nicoletti referred to is the geographic restriction on access to ChatGPT’s application programming interface (API), which currently blocks queries originating from Russia.
Nicoletti did not detail the tools CPST uses to probe the system for unauthorized access attempts. The company’s leadership added that the Russian hackers’ probing reflects only part of a broader, growing effort by various parties to gain control of ChatGPT.
Nicoletti also cited a case on the social network Reddit in which someone tried to exploit ChatGPT by teaching the AI a “new personality” called DAN (short for “Do Anything Now”). The user crafted input prompts that manipulated ChatGPT into producing content outside its intended guardrails, including hate speech.
It is not clear at this time whether any ChatGPT vulnerability has been exploited undetected, but if one has, it is most likely an advanced form of phishing attack. Even so, this remains a cause for concern.
According to Popular Mechanics, these concerns are why analysts place artificial general intelligence (AGI) at the top of the list of risks to the world. Australian parliamentarian Julian Hill believes that once people begin to think about it, it does not take long to realize that the disruptive and catastrophic risks posed by untamed AGI are “real, plausible, and easy to imagine.” In his view, AGI has the potential to revolutionize our world in ways we cannot yet imagine, but if it surpasses human intelligence, it could cause significant harm to humanity should its goals and motives not align with our own.