[LINK] ChatGPT's odds of getting code questions correct are worse than a coin flip
Jan Whitaker
jwhit at internode.on.net
Fri Aug 18 08:56:46 AEST 2023
I believe you can ask it to limit its 'answer' to some subset of the
data, e.g. something like reports/articles/journals that include a
reference/citation list. Has anyone tried that? Then the 'hearsay' is
somewhat mitigated, provided it hasn't made up the references, which it
has been known to do!
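
For what it's worth, here is a minimal sketch of that kind of prompt,
assuming the OpenAI Python library (the pre-1.0 ChatCompletion
interface). The system prompt wording and model choice are my own
assumptions, not anything tested:

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

    # Ask the model to restrict itself to citable sources. This only
    # constrains the *format* of the answer; the model can still
    # fabricate plausible-looking references.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": ("Answer only from published reports, articles "
                         "or journals, and give a full citation for "
                         "every claim. If you cannot cite a source, "
                         "say so rather than answering.")},
            {"role": "user",
             "content": "How reliable is ChatGPT at code questions?"},
        ],
    )
    print(response.choices[0].message.content)

Any citations in the output would still need checking by hand, since
nothing here stops the model inventing them.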
On 18/08/2023 8:15 am, Tom Worthington wrote:
> On 16/8/23 10:58, Bernard Robertson-Dunn wrote:
>
>> Suppose you wanted to tell an AI system how to distinguish between
>> right and wrong ...
>
> More a philosophical than technical question. Like a good researcher,
> you could get the AI to find what is supported by evidence. But that
> doesn't necessarily make the answer factually correct, or morally right.
>
> The problem is that the AI is mostly working on hearsay. It is
> harvesting text, which is what people said was the case, not what
> actually was. It will be interesting when the AI can look at direct
> evidence, from sensors, or by observing the behavior of people online,
> not what they say they do.
>
> If you ask AI about a controversial topic, the answer will depend on
> who put the most convincing propaganda online. How would you know if
> the answer was correct: ask the government?