[LINK] ChatGPT's odds of getting code questions correct are worse than a coin flip
Kim Holburn
kim at holburn.net
Tue Aug 15 09:04:06 AEST 2023
https://www.theregister.com/2023/08/07/chatgpt_stack_overflow_ai/
> ChatGPT, OpenAI's fabulating chatbot, produces wrong answers to software programming questions more than half the time,
> according to a study from Purdue University. That said, the bot was convincing enough to fool a third of participants.
> The Purdue team analyzed ChatGPT's answers to 517 Stack Overflow questions to assess the correctness, consistency,
> comprehensiveness, and conciseness of ChatGPT's answers. The US academics also conducted linguistic and sentiment analysis
> of the answers, and questioned a dozen volunteer participants on the results generated by the model.
> "Our analysis shows that 52 percent of ChatGPT answers are incorrect and 77 percent are verbose," the team's paper
> concluded. "Nonetheless, ChatGPT answers are still preferred 39.34 percent of the time due to their comprehensiveness
> and well-articulated language style." Among the set of preferred ChatGPT answers, 77 percent were wrong.
...
> Even when the answer has a glaring error, the paper stated, two out of the 12 participants still marked the response
> preferred. The paper attributes this to ChatGPT's pleasant, authoritative style.
...
> Among other findings, the authors found ChatGPT is more likely to make conceptual errors than factual ones. "Many answers
> are incorrect due to ChatGPT's incapability to understand the underlying context of the question being asked," the paper found.
--
Kim Holburn
IT Network & Security Consultant
+61 404072753
mailto:kim at holburn.net aim://kimholburn
skype://kholburn - PGP Public Key on request