[LINK] Robot cars and the fear gap

David Lochrin dlochrin at key.net.au
Fri Jul 15 15:05:19 AEST 2016


Hi Mike,

If we posit a situation where driverless technology has become so reliable that accidents are almost unknown, then I agree the ethical & legal issues would disappear.  In this scenario cars would no longer be fitted with manual controls and would be able to go anywhere: say, for example, along a winding, unmarked dirt road up the side of a mountain.  Why?  Because the technology would perform much better than a human driver.

However, the technology at this point is obviously far from that.  In the recent accident the Tesla ran into a truck turning in front of it at a T-intersection in broad daylight, surely a very basic avoidance situation.

Leaving aside details of the low-level technology and the feasibility of the ultimate goal, I think the issues various people have alluded to in this thread relate to the interim situation between now and then, and how they might be addressed.  Personally I think it'll be a long time; the technological, ethical, and legal devils are certainly in the detail.

Best wishes!
(I believe we both worked for a certain now-defunct computer company of good memory!!  A book could be written about that...)
David

-------

On 2016-07-14 19:05 Michael wrote:

> On 14 July 2016 at 18:13, David Lochrin <dlochrin at key.net.au> wrote:
> 
> >
> > <snip>
> > The core question is to what extent people are to be held responsible for
> > their actions.  Is a driverless car which kills someone the responsibility
> > of the owner, the manufacturer, the agency which approved it, or nobody?
> >
> 
> I think we can answer this.
> Consider the simple cruise control feature of modern cars.
> If you engage it, then carelessly collide with somebody, you cannot
> reasonably argue that the manufacturer or government is to blame for
> allowing the tool.
> Now consider an advanced cruise control that provides steering and braking
> in some circumstances. Again, we can simply state the driver is responsible
> if they do not take adequate care in its operation.
> 
> Extrapolate to a circumstance where the car effectively does all the
> driving and the person in the front seat is really just a passenger, yet
> they retain a steering wheel, brakes etc.
> Even if the driver lets the car drive itself 100% of the time, we can still
> hold the driver responsible for making that choice should the robot driver
> drive poorly, because we expect the human to be supervising.
> 
> Extrapolate yet again to a future where self-driving cars have dramatically
> lowered road crash incidence, and everyone places almost complete faith in
> their superior driving ability. Perhaps the mechanism is as reliable as an
> automatic transmission is today. We don't insist on a manual clutch in case
> the auto transmission fails. We, rightly, expect it to perform flawlessly
> all the time. But on rare occasions, auto transmissions do fail. And it is
> possible that failure might cause an accident.
> 
> Such a circumstance needn't hold the manufacturer liable, if they were not
> negligent, and would be highly unlikely to see the driver charged, as an
> unforeseen mechanical failure of such rarity can't be readily blamed on the
> driver's negligence. For calculations of fault, etc., we would likely agree
> the driver's insurance must pay for repairs, medical bills etc. But being
> 'at fault' does not mean the driver was negligent.
> 
> So I can see a circumstance where insurers absorb the risk of a failed
> robot driver mechanism as part of the bundle of risks they insure against.
> And I can also see them clamouring to do so, if indeed the robot drivers
> prove safer over time.
> 
> Best regards,
> Michael Skeggs