[LINK] ChatGPT

Tom Worthington tom.worthington at tomw.net.au
Tue Jan 31 09:47:08 AEDT 2023


On 29/1/23 15:22, David wrote:

> Tom ... I would just go on setting the oven timer! ...

Yes, I use an old fashioned oven timer.

> ... each such application surrenders a part of humanity's autonomy
> and accumulated wisdom to machines. ...

Yes, but this is not a new issue. I can recall when lifts had drivers,
often war veterans missing limbs (I found them scary). Most Australian
trains still have drivers, but some do not. In Singapore last year
engineers showed me a full-size autonomous city bus. There is a loss of
autonomy and flexibility with these systems, which are safer and more
efficient, but in an emergency about all they can do is stop.
https://blog.tomw.net.au/2023/01/virtual-bendy-busses-for-canberra.html

> ... suppose AI ... begin to be used in decision roles which Society 
> traditionally confers on educators, the judiciary, the medical 
> establishment, the Parliament, and so on ...

I, for one, welcome our AI overlords. ;-)

More seriously, AI is already routinely used for checking for plagiarism
in student assignments, and analysis of medical scans. Provided the AI
has been tested, is at least as good as a human, and there is human 
oversight, I don't have a problem.

But we have to be careful that the AI doesn't encode biases hidden in
human decision making, or mask deliberate discrimination under a cloak
of impartial tech.

> Who would provide the ongoing training? ...

The AI would be programmed by a few experts, which creates its own
problems.

> ... Would these AI systems train one another?

As in Colossus: The Forbin Project:
https://en.wikipedia.org/wiki/Colossus:_The_Forbin_Project

> And of course "training" can still be subverted by naughty humans...

As the current Robodebt inquiry shows, a bureaucracy made up of people
can be subverted for illegal purposes.

> ... For that matter, suppose ... fatal results to the vehicle 
> occupants? ...

Hopefully there will be standards established for cars to communicate
to avoid collisions. If corporations and programmers don't build the
systems properly, they should be held to account. But there will be
ethical dilemmas in that: does a car with one occupant deliberately
crash, to save the oncoming packed family wagon?

> Delegating  human affairs to AI systems on the scale you suggest is 
> simply incompatible with human society in my view.

Perhaps. But how many preventable accidents, higher costs, less
well-educated children, and innocent people sent to jail are we willing
to accept as a result of not using AI?



-- 
Tom Worthington, http://www.tomw.net.au