Experts are concerned about the use of ChatGPT for self-diagnosis

by nativetechdoctor
3 minutes read

AI, or artificial intelligence, here refers to algorithms that create content using machine learning methods, and such tools have gained popularity recently. In particular, the ChatGPT tool created by OpenAI has attracted great attention.

OpenAI, the company behind ChatGPT, claims the tool interacts in a conversational way: it can answer follow-up questions, admit mistakes, challenge incorrect information, and reject inappropriate requests. This claim suggests the AI can think, feel, and react like a human, making it a big draw.

Much debate has raged around ChatGPT and similar AI tools over the possibility that they will make it easier for students to cheat or will replace workers in some professions. On the other hand, they are also believed to have the potential to free up labor, saving people time for other work.

Medicine and healthcare are conspicuously among the areas that could soon adopt AI broadly. They are also areas where AI is likely to be exploited and misused, especially since these tools are easily accessible to anyone on the internet. To date, however, there has not been much analysis of the potential and risks of AI tools in these fields.

Recently, billionaire Bill Gates shared that he sees "obvious benefits" of ChatGPT for the healthcare industry and for other areas with large amounts of information to process. According to Gates, AI can help doctors write prescriptions and explain medical bills to patients, or support both the drafting and the understanding of legal documents.

Alan Petersen, Professor of Sociology at Monash University (Australia), has expressed concern about these predictions on the university's website. In his view, such statements are promissory discourse about new medical technologies, and that is very worrying.

Professor Alan Petersen shared: "At first glance, a sophisticated chatbot capable of producing information almost instantaneously is very attractive. Chatbots have been widely used for a while and, though at times useful, they still have many limitations. As AIs, they depend on information collected from many online sources, including unreliable and biased ones, with some information skewed along lines of gender, class, race, and age. Much online information is also personalized to create empathy and attract users. There are mechanisms designed to exploit emotions online, steering users' feelings and actions in certain ways, often to keep people online and encourage purchases. The idea that machines can sense and feel is far-fetched, but it is deeply rooted in science fiction and in the popular imagination, including that of figures such as Bill Gates (founder of Microsoft), Elon Musk (current owner of Twitter), Peter Thiel (founder of PayPal), and other big tech founders. They are not impartial when it comes to the benefits of AI. They and other billionaires will certainly be looking at the huge profits to be made from AI in the medical and wellness sectors, among other industries."

What bothers Professor Alan Petersen most is people's use of ChatGPT and similar tools for self-diagnosis and self-medication.

"Going online to self-diagnose illness is very common. Many people go online as soon as they feel sick to learn more about their condition and to find information and treatments, hoping to find 'shortcuts' that simplify complex decisions. AI will certainly be widely used by people who want quick answers about treating diseases, and that carries great risks, particularly for the many complex medical conditions. We need to talk much more about the dangers that inventions like AI can pose. Promising as the technology is, there is also talk that AI could replace doctors in the future, but there are interactions between doctors and patients that AI cannot replicate," said Professor Alan Petersen.
