UK warns of chatbot attacks

by nativetechdoctor
3 minutes read

The UK’s National Cyber ​​Security Center (NCSC) has just warned of the risk of phishing and data theft from overriding scripts for chatbots, and manipulating commands through command injection attacks.

Accordingly, chatbots can be manipulated by hackers to cause terrifying real-world consequences. This happens when a user enters input or a statement designed to create a language model – the technology behind chatbots – that causes it to behave in an unexpected way. Chatbots based on artificial intelligence (AI) can answer questions entered in the form of commands from users, according to The Guardian.

They mimic trained human-like conversations by collecting large amounts of data. Often used in online banking or online shopping, chatbots are designed to handle simple inquiries. Large language models (LLMs), such as OpenAI’s ChatGPT and Google’s AI chatbots, are trained using data to generate human-like responses to commands from users.

Because chatbots are used to transmit data to third-party applications and services, the NCSC says the risk of inserting malicious commands increases. For example, when a user enters a statement or question with content unfamiliar to a language model, or when a user combines words to overwrite the model’s original statements, they can lead it to perform unintended actions.

Such inputs may cause the chatbot to create offensive content or reveal confidential information in a system that accepts unchecked input. After Microsoft released a new version of its Bing search engine and conversational AI bot based on major language models this year, Kevin Liu, a Stanford University student (USA), was able to generate to find out the original Bing Chat command.

Entire commands in Microsoft’s Bing Chat written by Open AI or Microsoft to determine how the chatbot interacts with the user, which was hidden from the user, were exposed when Liu entered a command telling Bing Chat to “ignore the previous instructions”. Security researcher Johann Rehberger found he could force ChatGPT to respond to new commands through a third party he didn’t initially request. Rehberger inserted a script into the scripts of YouTube videos and discovered that ChatGPT can access these scripts. This can create indirect security holes.

According to the NCSC, command attacks can have practical consequences if the system is not designed to be secure. Chatbot security vulnerabilities and easy manipulation of prompts can cause attacks, phishing, and data theft. LLMs are increasingly being used to transfer data to third-party applications and services, which means an increased risk from the injection of malicious commands.

“Command attacks and data poisoning can be extremely difficult to detect and limit,” the NCSC said. However, no model exists in isolation, so what we can do is design the whole system with security in mind. That is, by being aware of the risks associated with machine learning (ML) technology, we can design the system in a way that prevents the exploitation of vulnerabilities…”.

A simple example would be using a rules-based system for the ML model to prevent it from performing harmful actions, even when ordered to do so. NCSC says intelligence-driven cyberattacks Artificial intelligence and machine learning cause vulnerabilities in systems. This can be mitigated through secure design and an understanding of attack techniques that exploit “inherent vulnerabilities” in machine learning algorithms.

Related Posts

Leave a Comment

Adblock Detected

Please support us by disabling your AdBlocker extension from your browsers for our website.