ChatGPT mostly answers programming questions incorrectly

A recent study from Purdue University (USA), as reported by TechRadar, has called ChatGPT’s reliability in answering programming-related questions into question. The study found that over half (52%) of ChatGPT’s responses to 517 programming questions taken from Stack Overflow were incorrect. The errors included conceptual misunderstandings (54%), factual inaccuracies (36%), logic errors in code (28%), and terminology errors (12%).

While ChatGPT does provide detailed and extensive answers, that verbosity can sometimes add to user confusion. Even so, some programmers appreciate the clear, ‘textbook standard’ wording of ChatGPT’s responses.

The study underscores the need for caution when relying on ChatGPT for programming tasks. The researchers called for further work to identify and correct these errors, and for greater transparency about the accuracy of the information ChatGPT provides.

Although generative AI like ChatGPT was expected to be a valuable support tool for programmers, this study shows that significant work remains to improve its reliability and accuracy in this area.
