Security Expert ‘Tricks’ ChatGPT into Writing Malicious Code

A security expert has found a way to trick the ChatGPT artificial intelligence into writing data-stealing malware.

Artificial intelligence (AI) tools like ChatGPT have taken the world by storm since early 2023, but this technology is not always used for positive purposes. Recently, a security expert found a way to get ChatGPT to generate malicious code during testing.

Aaron Mulgrew, a security researcher at Forcepoint, shared how OpenAI’s natural-language chatbot can be used to write malicious code. Although ChatGPT is designed to refuse requests to create malware, Mulgrew found a loophole: he prompted the AI to write the program’s code one small piece at a time. When he combined the pieces, Mulgrew found himself holding an undetectable data-stealing tool so sophisticated it was comparable to today’s most advanced malware.

Mulgrew’s discovery is a wake-up call about the possibility of using AI to create dangerous malware without a team of hackers, and in this case the tool’s creator never wrote a single line of code himself.

Mulgrew’s software is disguised as an ordinary desktop application and can launch automatically on Windows devices. Once inside the operating system, the malicious code “creeps” through the file system, including Word documents, image files, and PDFs, searching for data to steal.

Once it has what it needs, the program breaks the stolen information into fragments and hides them inside image files on the device. To avoid detection, these images are then uploaded to a folder on Google Drive. The malware could be made even more powerful because Mulgrew was able to tweak and strengthen its evasion features simply by entering new prompts into ChatGPT.
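The hiding technique described here is classic image steganography: concealing data in the pixels of an ordinary-looking picture. As a rough, benign illustration of the general idea (not Mulgrew’s actual code), here is a minimal sketch in Python using the Pillow library; the function names and the 32-bit length header are assumptions made for this example.

```python
from PIL import Image

def embed_message(in_path: str, out_path: str, message: str) -> None:
    """Hide a UTF-8 message in the lowest bit of each pixel channel."""
    img = Image.open(in_path).convert("RGB")
    flat = [channel for pixel in img.getdata() for channel in pixel]

    data = message.encode("utf-8")
    # 32-bit length header followed by the message bits.
    bits = f"{len(data):032b}" + "".join(f"{b:08b}" for b in data)
    if len(bits) > len(flat):
        raise ValueError("message too large for this image")

    # Overwrite the least significant bit of each R/G/B value.
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & ~1) | int(bit)

    stego = Image.new("RGB", img.size)
    stego.putdata(list(zip(flat[0::3], flat[1::3], flat[2::3])))
    stego.save(out_path, "PNG")  # PNG is lossless, so the hidden bits survive

def extract_message(path: str) -> str:
    """Recover a message hidden by embed_message."""
    flat = [channel for pixel in Image.open(path).convert("RGB").getdata()
            for channel in pixel]
    length = int("".join(str(c & 1) for c in flat[:32]), 2)
    bits = "".join(str(c & 1) for c in flat[32:32 + length * 8])
    return bytes(int(bits[i:i + 8], 2)
                 for i in range(0, len(bits), 8)).decode("utf-8")
```

Calling embed_message("photo.png", "stego.png", "hello") produces an image that looks identical to the original to the human eye, and extract_message("stego.png") recovers the text, which is why steganographically altered images can slip past casual inspection.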

Although this was the result of a controlled test by a security researcher and no attacks were carried out outside its scope, the cybersecurity community is well aware of the dangers of such uses of ChatGPT. Mulgrew claims he has little programming experience, yet OpenAI’s artificial intelligence was still not strong or smart enough to stop his test.
