Names that make ChatGPT ‘freeze’

by nativetechdoctor
2 minute read

Recent reports have highlighted a puzzling issue that ChatGPT users encounter when asking about certain individuals. Specifically, when the name “David Mayer” is entered, the system abruptly terminates the session without responding. Similar incidents have been reported for well-known law professors, including “Jonathan Zittrain” and “Jonathan Turley.”

According to a report by 404 Media, several other names also trigger errors in ChatGPT’s responses, producing messages such as “I can’t create a response” or “An error occurred while creating a response.” Attempts to circumvent the issue by entering the names in reverse have also proven ineffective.

Ars Technica has compiled a list of names that appear to disrupt ChatGPT’s functionality. In addition to those already mentioned, names such as Brian Hood, David Faber, and Guido Scorza also trip up the system.

The underlying reason ChatGPT refuses to respond to these names appears to be linked to earlier inaccuracies in information the AI provided about them. For instance, Brian Hood threatened legal action against OpenAI in 2023 after it spread false information about him, leading the company to implement filters to prevent further inaccuracies tied to his name. Similarly, ChatGPT mistakenly associated Jonathan Turley with a fabricated sexual harassment scandal, even citing a nonexistent Washington Post article.
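To illustrate the kind of behavior being described, here is a minimal Python sketch of a crude, hard-coded name filter. It is purely hypothetical: the name list, the guard_output function, and the refusal message are assumptions for demonstration, not OpenAI’s actual implementation.

```python
# Hypothetical illustration of a hard-coded name filter.
# The blocked list, function name, and refusal text are assumptions,
# not OpenAI's real code.

BLOCKED_NAMES = {
    "brian hood",
    "jonathan turley",
    "jonathan zittrain",
    "david faber",
    "guido scorza",
}

REFUSAL = "I'm unable to produce a response."

def guard_output(text: str) -> str:
    """Return the text unless it contains a blocked name."""
    lowered = text.lower()
    if any(name in lowered for name in BLOCKED_NAMES):
        # A blanket substring check like this also trips on perfectly
        # legitimate mentions of the names, which is the side effect
        # Ars Technica warns about.
        return REFUSAL
    return text

print(guard_output("Brian Hood is the mayor of Hepburn Shire."))  # refused
print(guard_output("The weather is fine today."))                 # passes
```

Because a check like this matches the name anywhere in the text, any content that happens to contain one of the names, such as a web page the chatbot is asked to summarize, would trigger the same refusal, which is exactly the concern raised further below.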

As for Jonathan Zittrain, the reason his name is blocked remains unclear. He recently wrote an article advocating for the regulation of AI agents, and he has also been mentioned in the New York Times copyright lawsuit against OpenAI and Microsoft. Notably, the names of other authors mentioned in that lawsuit do not provoke the same response from the AI.

The fluctuating status of David Mayer’s name adds another layer of complexity to the situation. While some speculate that it may be connected to David Mayer de Rothschild, definitive evidence is lacking.

Ars Technica has cautioned that these filtering mechanisms could cause problems for ChatGPT users. For instance, someone with malicious intent could disrupt a session by embedding the names in images using obscure fonts, and adding the names to a website could prevent ChatGPT from processing its content.

In related legal developments, several Canadian media organizations have filed a lawsuit against OpenAI, accusing the company of using their articles without authorization. They are seeking $14,239 in compensation for each infringement, and if the lawsuit goes against OpenAI, the company could face liabilities running into billions of dollars.
