Microsoft’s Chatbot Says Google’s Chatbot Was… ‘Shut Down’

by nativetechdoctor

With Big Tech rushing to launch AI chatbots, mistakes by these systems threaten to degrade the information ecosystem on the web. Recently, an experiment reported by The Verge showed that Microsoft’s Bing claimed Google’s Bard had been “shut down”, citing a fabricated joke as its evidence.

On March 22, when some users asked Microsoft’s Bing chatbot whether Google’s Bard chatbot had “stopped working”, it answered yes. The evidence Bing cited was a joking comment posted on Hacker News on March 21, which revolved around a joke spread via ChatGPT that Google would “shut down” Bard within a year. Bard itself had missed the context and humorous nature of the comment and reported it as fact. Now it was Bing’s turn to repeat this false “truth”.

Now, Bing has changed its answer and confirms that Bard is still active. Still, the episode has left many users feeling uneasy about artificial intelligence systems. AI is proving both malleable and inconsistent in the face of misinformation: chatbots can’t judge reliable news sources, misread stories about themselves, and misreport their own abilities. In this case, the whole thing started with a single joking comment.

According to analysis from cybersecurity experts, although this is a somewhat “ridiculous” situation, it can have serious consequences. Because AI language models are not yet capable of effectively classifying information, the current widespread launch of chatbots risks creating a flood of misinformation and mistrust – potential dangers that cannot be fully contained or debugged. Many users suspect this is because Microsoft, Google, and OpenAI have decided that market share matters more than information safety.

“Imagine what users can do if they want these AI systems to fail. These companies can put out as many disclaimers as they like on their chatbots – telling people they are ‘experiments’, ‘collaborations’, and ‘not search engines’ – but these remain fragile safeguards against misinformation. We have already seen how people use artificial intelligence systems, and some have not stopped spreading misinformation, including using AI to create stories that were never written or told about non-existent books. And now, chatbots are citing each other’s mistakes,” said James Vincent, technology reporter at The Verge.
