
Catholic Ethics, the Trolley Car Problem and Driverless Cars

“Oh, I hate the cheap severity of abstract ethics.” – Oscar Wilde

A recent exchange of comments on Don McClarey’s post, “How Many Lights”, put me in mind of that famous problem in ethics, “The Trolley Problem”, and what Catholic teaching on ethics might have to say about it. Here’s the problem (there are many variations; see the linked article):

A trolley car with defective brakes is heading down a track on which five people are standing; you can throw a switch to deflect the trolley onto a side track on which only one person is standing. If you throw the switch, one person will be killed; if you don’t, five people will be killed. (This ignores an obvious solution, yelling to the five people, “Hey you dolts, get off the track!”, but then where would this discussion go?) You might say choose the lesser evil, where only one person is killed. But suppose that one person is a brilliant teenager, a 17-year-old grad student in molecular biology, and the five on the main track are convicted killers working on a chain gang. Or should you put everything in the hands of God and pray that he sends a lightning bolt to destroy the trolley car?

What does Catholic ethical teaching have to say about this problem? If you do a web search for “lesser evil Catholic Catechism”, you’ll get a number of references, many of them, on examination, unsound. Why? Choosing a lesser evil is not justified by Catholic morality; one cannot use an “ends justify the means” argument to justify doing an evil act. How, then, do you find a way to act in the real world, where choices aren’t clear-cut?

A guide for the perplexed in these situations is the “double effect” principle, first proposed by St. Thomas Aquinas (CCC 2263). This principle differs in subtle but important ways from the notion of choosing the lesser of several evils. George Weigel gives an excellent overview here; I’ll excerpt his quote from the National Catholic Bioethics Center in Philadelphia:

“The principle of double effect in the Church’s moral tradition teaches that one may perform a good action even if it is foreseen that a bad effect will arise only if four conditions are met: 1) The act itself must be good. 2) The only thing that one can intend is the good act, not the foreseen but unintended bad effect. 3) The good effect cannot arise from the bad effect; otherwise, one would do evil to achieve good. 4) The unintended but foreseen bad effect cannot be disproportionate to the good being performed.”

How does the double effect principle apply to the trolley problem?   Let’s examine the two alternatives, throwing the switch and not throwing the switch,  taking into account the four conditions stipulated by the Bioethics Center.

  • First, if you throw the switch or don’t throw the switch, is that act, in itself, good or bad? In this case, unlike the surgical example given in Weigel’s article, the act in itself has no moral status; the only good or bad will come from the consequences, and as I suggested above, that can be known only by knowing more about the situation than the relative number of people on the two tracks.
  • Second, if you throw the switch, you intend good; your intention isn’t to kill the one person on the side track. Likewise, if you don’t throw the switch, you don’t intend to kill the five people on the main track, but rather to save the one person on the side track.
  • Third, killing the one person on the side track is not the direct cause of saving the five on the main track, nor is the death of the five people on the main track (if the switch is not pulled) the direct cause of saving the one person on the side track.
  • Fourth, the disproportionateness (if this be a word) of either pulling or not pulling the switch cannot be assessed until more is known about all the people involved. And that becomes quite a tricky and messy business, verging on that bad mode of ethical analysis, utilitarianism.

So, it seems the double effect principle doesn’t help us that much in finding an answer to the trolley problem unless we know more about what’s going on.

Let’s turn to another thought experiment, more in keeping with our present times: the problem of the driverless car. Imagine the following situation: a driverless car is going down a steep hill; on either side of the road are steep drop-offs, with very flimsy guard rails; at the bottom of the hill is a school crossing over which a line of schoolchildren is passing; the brakes of the driverless car fail and it starts to accelerate down the hill toward the schoolchildren.

Let’s add two alternative conditions to our thought experiment: 1) there’s no passenger in the driverless car; 2) the car has a passenger in it. Let’s consider the first condition and the implied precondition: the driverless car does what its computer program tells it to do. I’m also going to assume that any AI (Artificial Intelligence) device that is not a passive instrument will be set up to follow Isaac Asimov’s Three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

–Isaac Asimov, “Runaround”.

Clearly, the Robotic Laws would require the driverless car to drive off the edge of the road and possibly destroy itself, violating the third law, but obeying the first two.  We haven’t had to consider the double effect principle, because the Three Laws of Robotics (or their software equivalent) have dealt with the situation.
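
For programming-minded readers, here is a minimal sketch, in Python, of how such a strict priority ordering might look in code for the empty-car case. This is not any real autonomous-vehicle API; every name in it (Situation, choose_action, and so on) is hypothetical, invented purely for illustration.

    # A toy encoding of Asimov-style priorities for the no-passenger case.
    # All names are hypothetical; this is an illustration, not a real
    # vehicle-control API.
    from dataclasses import dataclass

    @dataclass
    class Situation:
        brakes_working: bool     # can the car stop normally?
        humans_on_road: int      # children at the crossing ahead
        humans_in_car: int       # passengers aboard

    def choose_action(s: Situation) -> str:
        """Pick an action by Asimov's priorities: protecting humans
        (First Law) outranks preserving the robot itself (Third Law)."""
        if s.brakes_working:
            return "brake"                 # no conflict; just stop
        if s.humans_on_road > 0 and s.humans_in_car == 0:
            # First Law beats Third Law: sacrifice the empty car rather
            # than allow humans to come to harm through inaction.
            return "steer_off_road"
        return "stay_on_road"              # no human endangered either way

    print(choose_action(Situation(brakes_working=False,
                                  humans_on_road=5, humans_in_car=0)))
    # prints: steer_off_road

Note how easy the empty-car case is: the priorities never put one human life against another, so a few lines of code settle it.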

Now let’s consider the second condition, that there is a passenger in the car. Let’s also assume that the passenger cannot override the car’s program instructions. What should the program do in this case, which is essentially the trolley problem in a different guise? As in the trolley problem, it’s not clear how the double effect principle might be applied.

Finally, let’s assume that the passenger can override the program and drive the car. Recall that the brakes don’t work. There are side barriers, but they are weak. Possibly a driver might try skidding along the side barriers to slow the car down, but if that didn’t work, should he/she try to drive over the cliff, in effect committing suicide? Suicide is a sin, but he/she isn’t intending to kill himself/herself.

Let’s take the four conditions in turn:

  • The act of driving the car off the road is either good (it avoids killing the children below) or neutral, so condition 1) of the double effect principle is satisfied.
  • The only thing intended is to save the children, so condition 2) holds.
  • Saving the children does not arise from the driver’s death; the latter is an unintended byproduct of the car going over the cliff, so condition 3) is satisfied.
  • For condition 4), the intended good is proportionately greater than the evil; we can invoke, as on the sinking Titanic, women and children in the lifeboats first.

So all four conditions of the double effect principle are met, if driving off the road and over the cliff is the only way the driver can avoid hitting the children. We can further complicate this thought experiment by adding more passengers, including a pregnant woman. I’ll leave the analysis of that to the reader.
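
As another aside for programmers, here is what the four conditions look like when written down mechanically: a minimal Python sketch (all names hypothetical) encoding them as a checklist. Notice what it leaves open; the answers to conditions 1) and 4), whether the act is good and whether the harm is proportionate, are moral judgments that must be supplied from outside, and no line of code computes them.

    # A toy double-effect checklist. All names are hypothetical; the four
    # boolean inputs are moral judgments the program cannot make itself.
    def double_effect_permits(act_good_or_neutral: bool,
                              only_good_intended: bool,
                              good_arises_from_bad: bool,
                              harm_disproportionate: bool) -> bool:
        return (act_good_or_neutral           # 1) the act itself is good/neutral
                and only_good_intended        # 2) only the good effect is intended
                and not good_arises_from_bad  # 3) good must not come via the evil
                and not harm_disproportionate)  # 4) harm not out of proportion

    # The cliff scenario as analyzed above: swerving is good or neutral,
    # only saving the children is intended, their rescue does not arise
    # from the driver's death, and the harm is judged proportionate.
    print(double_effect_permits(True, True, False, False))   # prints: True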

Let me ask you, dear reader: do you think it will be possible to program ethical principles, including the principle of double effect, into AI devices? I don’t.