Security risks when using ChatGPT

As tech giants race to develop artificial intelligence (AI), chatbots like ChatGPT are raising data-privacy concerns among companies and regulators.

JPMorgan Chase, America’s largest bank, along with Amazon and the consulting firm Accenture, have all restricted employees from using ChatGPT over data security concerns.

According to CNN, these businesses’ worries are well founded. On March 20, a bug in OpenAI’s chatbot exposed user data. Although the bug was quickly fixed, the company revealed that the issue affected 1.2% of ChatGPT Plus users. The leaked information included affected users’ names, email addresses, billing addresses, the last four digits of their credit card numbers, and the cards’ expiration dates.

On March 31, after OpenAI disclosed the incident, Italy’s data protection authority (Garante) issued a temporary ban on ChatGPT, citing privacy concerns.

Mark McCreary – co-chair of the data security and privacy practice at law firm Fox Rothschild LLP – told CNN that the security concerns surrounding ChatGPT are not overstated. He likens the AI chatbot to “a black box”.

ChatGPT was launched by OpenAI in November 2022 and quickly gained attention for its ability to write essays and compose stories or lyrics from simple prompts. Tech giants like Google and Microsoft are also rolling out AI tools that work similarly, powered by large language models trained on vast amounts of online data.

When users enter information into these tools, they don’t know how it will then be used, McCreary added. This is worrying for companies: as more employees use these tools to help write work emails or take meeting notes, the risk of trade secrets being exposed grows.

Steve Mills – Director of AI Ethics at Boston Consulting Group (BCG) – says companies are concerned about employees inadvertently disclosing sensitive information. If the data people input is used to train the AI tool, they have ceded control of that data to someone else.

According to OpenAI’s privacy policy, the company may collect personal information and data from service users to improve its AI models. It may use this information to improve or analyze its services, conduct research, communicate with users, and develop new programs and services.

The privacy policy states that OpenAI may provide personal information to third parties without notifying users, unless required by law. OpenAI also has its own Terms of Service document, but the company places most of the responsibility on users to take appropriate measures when interacting with its AI tools.

OpenAI has published a blog post about its approach to AI safety. The company emphasizes that it does not use the data to sell services, advertise, or build user profiles; rather, it uses the data to make its models more useful. For example, users’ conversations may be used to train ChatGPT.

The privacy policy of Google, which is developing the Bard AI tool, has additional provisions for AI users. The company selects a small portion of conversations and uses automated tools to remove personally identifiable information. This approach both improves Bard and protects user privacy.

Sample conversations are reviewed by trainers and kept for up to three years, separate from the user’s Google account. Google also reminds users not to include personal information about themselves or others in conversations with Bard. The tech giant emphasized that it does not use these conversations for advertising purposes and will announce any changes in the future.

Bard AI allows users to choose not to save conversations to their Google account, as well as review or delete conversations via the link. In addition, the company has safeguards designed to prevent Bard from including personal information in responses.
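Google has not published the details of its automated PII-removal tooling, but the general technique it describes – scanning sampled text and replacing identifiable fields with placeholders before human review – can be illustrated with a minimal, hypothetical sketch. The patterns and placeholder names below are assumptions for demonstration, not Google’s actual pipeline:

```python
import re

# Hypothetical illustration only: a minimal regex-based PII scrubber.
# Real systems use far more sophisticated detectors (ML-based entity
# recognition, checksum validation, locale-aware formats, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace common PII patterns with typed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(scrub("Reach me at jane.doe@example.com or 555-123-4567."))
# → Reach me at [EMAIL] or [PHONE].
```

Scrubbing before storage means that even if sampled conversations are later reviewed by trainers, the identifying fields are no longer present – which is the privacy property the policy describes.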

Steve Mills notes that users and developers sometimes discover security risks hidden in new technologies only when it is too late. For example, an autocomplete feature could inadvertently reveal a user’s Social Security number.

Users should not put anything into these tools that they wouldn’t want shared with others, Mills said.
