[LINK] Robot cars and the fear gap

Karl Auer kauer at biplane.com.au
Thu Jul 14 16:22:29 AEST 2016


On Thu, 2016-07-14 at 15:01 +1000, Brendan wrote:
> On 07/14/2016 02:06 PM, Karl Auer wrote:
> > The fallacy in the argument as it applies to autonomous vehicles is
> The argument has nothing whatsoever to do with autonomous vehicles
> making decisions.

OK. It turns out you were applying "the trolley problem" with the
trolley being the robot cars themselves: do we go down this track and
let these people die, or down that other track and let those people
die?
  
> I can only assume that you have not understood.

Well, yes and no. *Any* real-world application of the trolley problem
is fallacious.

> *If presented with this data*, then the choice by *human decision
> makers* to allow the vehicles on the road is not only a choice about 
> how many people are going to die, but it's also a choice about what 
> type of people (ie class A or class B) they choose to let die.

It's fallacious for the same reasons I gave - there is no clear line
between the two classes, or any way to create one.

There are three other reasons as well.

The first is that a major aspect of the trolley problem is time: the
switcher must make a decision now, without time to consider deeply. In
the trolley problem, inaction is itself a decision, and one with a
clear outcome. Human decision makers weighing whether to allow these
vehicles on the road are under no such pressure, and their inaction
has no such clear outcome.

The second is that the trolley problem gives the switcher almost no
information on which to base a decision - just a body count, nothing
more. That is far too little to go on. Besides which, human decision
makers in the real world rarely if ever use the facts, and I would
venture to suggest NEVER use ONLY the facts, to make their decisions.

The third is that there is no way to know, except in the light of
experience, what other risks may surface as a result of widespread use
of autonomous vehicles. Made-up examples: How many people will die
because they assume autonomous vehicles have "seen" them and will/can
avoid hitting them? How many young men will die playing chicken with
robot vehicles? How many people will die due to failures or subversions
of autonomous vehicles? And so on. In other words, there are more than
just two possible outcomes.

In short, if the situation you posit were actually possible, then you
might have a point, but it's not. Human decision makers are not facing
a trolley problem; they are facing a problem that requires judgement
and wisdom.

Regards, K.

-- 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Karl Auer (kauer at biplane.com.au)
http://www.biplane.com.au/kauer
http://twitter.com/kauer389

GPG fingerprint: E00D 64ED 9C6A 8605 21E0 0ED0 EE64 2BEE CBCB C38B
Old fingerprint: 3C41 82BE A9E7 99A1 B931 5AE7 7638 0147 2C3C 2AC4
