[LINK] ChatGPT's odds of getting code questions correct are worse than a coin flip
Tom Worthington
tom.worthington at tomw.net.au
Fri Aug 18 08:15:10 AEST 2023
On 16/8/23 10:58, Bernard Robertson-Dunn wrote:
> Suppose you wanted to tell an AI system how to distinguish between right
> and wrong ...
More a philosophical than a technical question. Like a good researcher,
you could get the AI to find what is supported by evidence. But that
doesn't necessarily make the answer factually correct, or morally right.
The problem is that the AI is mostly working on hearsay. It is
harvesting text, which is what people said was the case, not what
actually was. It will be interesting when the AI can look at direct
evidence, from sensors, or by observing the behavior of people online,
rather than what they say they do.
If you ask AI about a controversial topic, the answer will depend on who
put the most convincing propaganda online. How would you know if the
answer was correct: ask the government?
--
Tom Worthington, http://www.tomw.net.au