[LINK] Self-driving cars .. kill you to save multiple others?
Marghanita da Cruz
marghanita at ramin.com.au
Sat Jun 20 08:40:34 AEST 2015
This principle, and the law around it, are discussed in a recent judgment of the NSW Land and Environment Court:
> Medium Neutral Citation:
> Leichhardt Council v Geitonia Pty Ltd (No 6) [2015] NSWLEC 51
...
> ISSUE 3: THE DEFENCE OF NECESSITY
>
> Alternatively, Geitonia and Mr Gertos raise a defence of necessity.
>
> Legal principles
>
> The availability of the defence of necessity has been recognised in extreme and strictly limited circumstances: Fairall and Yeo, Criminal Defences in Australia (4th ed, 2005, LexisNexis) Chapter 6. The defence has rarely been successful. The jurisprudence in England and Canada was reviewed in In Re A (Children) (Conjoined Twins: Surgical Separation) [2000] 4 All ER 961 (CA) at 1032-1052.
>
> The common law defence of necessity excuses what would otherwise be a criminal offence. It is probably correct to say that all statutes creating statutory offences are to be construed as being subject to the common law defence of necessity: R v Loughnan [1981] VR 443 (FC/SC of Vic) at 458 per Crockett J. I will proceed on the assumption that s 76A(1) EPA Act, creating the statutory offence with which the defendants are charged, is to be construed as being subject to the common law defence of necessity.
>
> A high bar was set for the defence of necessity in a case of cannibalism on the high seas, The Queen v Dudley and Stephens (1884) 14 QBD 273. Four shipwrecked sailors were adrift in an open boat on the high seas more than one thousand miles from land. One of their number, the cabin boy, was the youngest and eventually became the weakest. After 20 days adrift they had been without food for seven days and without water for five. Dudley and Stephens killed the cabin boy and (with the fourth sailor) ate his flesh and drank his blood. Four days later, a passing ship rescued them in the lowest state of prostration. The two killers were tried for murder. Their defence of necessity was that if they did not kill and feed on one of their number, they would all die of starvation. Delivering the judgment of a court consisting of five judges, Lord Coleridge CJ rejected the defence of necessity, convicted them of murder and sentenced them to death. Acknowledging that the prisoners were subject to “sufferings which might break down the bodily powers of the strongest man, and try the conscience of the best” (at 278), Lord Coleridge intimated that the Crown might exercise the prerogative power of mercy (at 288). This the Crown later did, by commuting the death sentence to six months in prison.
>
> In Loughnan, the three elements of the defence of necessity were identified and explained, at 448-449:
>
> …there are three elements involved in the defence of necessity. First, the criminal act or acts must have been done only in order to avoid certain consequences which would have inflicted irreparable evil upon the accused or upon others whom he was bound to protect. The limits of this element are at present ill defined and where those limits should lie is a matter of debate. But we need not discuss this element further because the irreparable evil relied upon in the present case was a threat of death and if the law recognizes the defence of necessity in any case it must surely do so where the consequence to be avoided was the death of the accused. We prefer to reserve for consideration if it should arise what other consequence might be sufficient to justify the defence…
>
> The other two elements involved…can for convenience be given the labels, immediate peril and proportion, although the expression of what is embodied in those two elements will necessarily vary from one type of situation to another.
>
> The element of imminent peril means that the accused must honestly believe on reasonable grounds that he was placed in a situation of imminent peril...all the cases in which a plea of necessity has succeeded are cases which deal with an urgent situation of imminent peril. Thus if there is an interval of time between the threat and its expected execution it will be very rarely if ever that a defence of necessity can succeed.
>
> The element of proportion simply means that the acts done to avoid the imminent peril must not be out of proportion to the peril to be avoided. Put in another way, the test is: would a reasonable man in the position of the accused have considered that he had any alternative to doing what he did to avoid the peril?…
..
http://www.caselaw.nsw.gov.au/decision/551b725ae4b04b50e5e96df9
The law of necessity ...
On 19/06/15 23:50, Frank O'Connor wrote:
> Nice one, Stephen :)
>
> My BIG problem with ethical arguments like the one below is that in the real world we’d never even have the time to make them or consider the implications therein.
>
> You’re about to prang into something/somebody, you have about a second to make a decision, and you tend to go for the first panicked alternative you have … or do nothing and just let it all happen.
>
> What the philosopher doesn’t consider is that the computer, probably for the first time in history, would have the time to digest all the inputs and make a rational, pre-agreed-on ethical decision … rather than go with the panicked-poo-in-the-pants reactions (or non-reactions) that have otherwise predominated in the case of tragedies like this.
>
> And if, say, a Utilitarian argument … the greatest good for the greatest number … was routinely used in vehicle emergencies, then EVERY bystander or party who could affect the equation would know that their obligation was to get the kid out of the way rather than waste their effort on the crowded trolley. Again, eliminating the need for decision-making, and presumably decreasing the response time, amongst bystanders who could affect the outcome.
>
> And yeah, it’s not an answer to the philosophical conundrum … and just my 2 cents worth …
> ---
>> On 19 Jun 2015, at 10:10 pm, Stephen Loosley <stephenloosley at zoho.com> wrote:
>>
>> Will your self-driving car be programmed to kill you if it means saving more strangers?
>>
>> Date: June 15, 2015
>> Source: University of Alabama at Birmingham
>> http://www.sciencedaily.com/releases/2015/06/150615124719.htm
>>
>>
>> Summary: The computer brains inside autonomous vehicles will be fast enough to make life-or-death decisions. But should they? A bioethicist weighs in on a thorny problem of the dawning robot age.
>>
>>
>> Imagine you are in charge of the switch on a trolley track. The express is due any minute; but as you glance down the line you see a school bus, filled with children, stalled at the level crossing. No problem; that's why you have this switch. But on the alternate track there's more trouble: Your child, who has come to work with you, has fallen down on the rails and can't get up. That switch can save your child or a bus-full of others, but not both. What do you do?
>>
>> This ethical puzzler is commonly known as the Trolley Problem. It's a standard topic in philosophy and ethics classes, because your answer says a lot about how you view the world. But in a very 21st-century take, several writers have adapted the scenario to a modern obsession: autonomous vehicles.
>>
>> Google's self-driving cars have already driven 1.7 million miles on American roads, and have never been the cause of an accident during that time, the company says. Volvo says it will have a self-driving model on Swedish highways by 2017. Elon Musk says the technology is so close that he can have current-model Teslas ready to take the wheel on "major roads" by this summer.
>>
>> Who watches the watchers?
>>
>> The technology may have arrived, but are we ready?
>>
>> Google's cars can already handle real-world hazards, such as cars' suddenly swerving in front of them. But in some situations, a crash is unavoidable. (In fact, Google's cars have been in dozens of minor accidents, all of which the company blames on human drivers.) How will a Google car, or an ultra-safe Volvo, be programmed to handle a no-win situation -- a blown tire, perhaps -- where it must choose between swerving into oncoming traffic or steering directly into a retaining wall?
>>
>> The computers will certainly be fast enough to make a reasoned judgment within milliseconds. They would have time to scan the cars ahead and identify the one most likely to survive a collision, for example, or the one with the most other humans inside. But should they be programmed to make the decision that is best for their owners? Or the choice that does the least harm -- even if that means choosing to slam into a retaining wall to avoid hitting an oncoming school bus? Who will make that call, and how will they decide?
>>
>> "Ultimately, this problem devolves into a choice between utilitarianism and deontology," said UAB alumnus Ameen Barghi. Barghi, who graduated in May and is headed to Oxford University this fall as UAB's third Rhodes Scholar, is no stranger to moral dilemmas. He was a senior leader on UAB's Bioethics Bowl team, which won the 2015 national championship. Their winning debates included such topics as the use of clinical trials for Ebola virus, and the ethics of a hypothetical drug that could make people fall in love with each other. In last year's Ethics Bowl competition, the team argued another provocative question related to autonomous vehicles: If they turn out to be far safer than regular cars, would the government be justified in banning human driving completely? (Their answer, in a nutshell: yes.)
>>
>> Death in the driver's seat
>>
>> So should your self-driving car be programmed to kill you in order to save others? There are two philosophical approaches to this type of question, Barghi says. "Utilitarianism tells us that we should always do what will produce the greatest happiness for the greatest number of people," he explained. In other words, if it comes down to a choice between sending you into a concrete wall or swerving into the path of an oncoming bus, your car should be programmed to do the former.
>>
>> Deontology, on the other hand, argues that "some values are simply categorically always true," Barghi continued. "For example, murder is always wrong, and we should never do it." Going back to the trolley problem, "even if shifting the trolley will save five lives, we shouldn't do it because we would be actively killing one," Barghi said. And, despite the odds, a self-driving car shouldn't be programmed to choose to sacrifice its driver to keep others out of harm's way.
>>
>> Every variation of the trolley problem -- and there are many: What if the one person is your child? Your only child? What if the five people are murderers? -- simply "asks the user to pick whether he has chosen to stick with deontology or utilitarianism," Barghi continued. If the answer is utilitarianism, then there is another decision to be made, Barghi adds: rule or act utilitarianism.
>>
>> "Rule utilitarianism says that we must always pick the most utilitarian action regardless of the circumstances -- so this would make the choice easy for each version of the trolley problem," Barghi said: Count up the individuals involved and go with the option that benefits the majority.
>>
>> But act utilitarianism, he continued, "says that we must consider each individual act as a separate subset action." That means that there are no hard-and-fast rules; each situation is a special case. So how can a computer be programmed to handle them all?
>>
>> "A computer cannot be programmed to handle them all," said Gregory Pence, Ph.D., chair of the UAB College of Arts and Sciences Department of Philosophy. "We know this by considering the history of ethics. Casuistry, or applied Christian ethics based on St. Thomas, tried to give an answer in advance for every problem in medicine. It failed miserably, both because many cases have unique circumstances and because medicine constantly changes."
>>
>> Preparing for the worst
>>
>> The members of UAB's Ethics and Bioethics teams spend a great deal of time wrestling with these types of questions, which combine philosophy and futurism. Both teams are led by Pence, a well-known medical ethicist who has trained UAB medical students for decades.
>>
>> To arrive at their conclusions, the UAB team engages in passionate debate, says Barghi. "Along with Dr. Pence's input, we constantly argue positions, and everyone on the team at some point plays devil's advocate for the case," he said. "We try to hammer out as many potential positions and rebuttals to our case before the tournament as we can so as to provide the most comprehensive understanding of the topic. Sometimes, we will totally change our position a couple of days before the tournament because of a certain piece of input that was previously not considered."
>>
>> That happened this year when the team was prepping a case on physician addiction and medical licensure. "Our original position was to ensure the safety of our patients as the highest priority and try to remove these physicians from the workforce as soon as possible," Barghi said. "However, after we met with Dr. Sandra Frazier" -- who specializes in physicians' health issues -- "we quickly learned to treat addiction as a disease and totally changed the course of our case."
>>
>> Barghi, who plans to become a clinician-scientist, says that ethics competitions are helpful practice for future health care professionals. "Although physicians don't get a month of preparation before every ethical decision they have to make, activities like the ethics bowl provide miniature simulations of real-world patient care and policy decision-making," Barghi said. "Besides that, it also provides an avenue for previously shy individuals to become more articulate and confident in their arguments."
>>
--
Marghanita da Cruz
Telephone: 0414-869202
http://www.ramin.com.au