49

My understanding is that John M. Taurek suggests that, in the trolley problem, we should flip a coin when deciding between saving 5 lives and saving 1 life (assuming we do not know any of these people). He says that this gives everyone an equal chance of survival, which he regards as the fairest and most reasonable approach.

This seems inherently wrong to me, but I can't articulate why without appealing to utilitarianism. How can I argue against it without appealing to utilitarianism?

user32889

18 Answers

106

The ethical problem is that you pretend to avoid making a decision - but you have in fact already made one, namely that both of these outcomes are equal enough to justify a 50/50 choice.

Tom
  • 4
    Even before the coin is tossed, there is an ethical problem of quantifying the importance of lives purely based on comparing numbers. – dtech Apr 27 '18 at 20:51
  • 1
    That's beside the argument - which is that you lack the information you could base such a decision on in the first place. Also, you assume that it is all about the outcome (number of persons saved), i.e. a utilitarian/consequentialist view, which is explicitly what the question excludes from an answer. – Philip Klöcking Apr 28 '18 at 14:30
  • 2
    Siding with @PhilipKlöcking on this ...it should be quite easy to see the availability of a line of logic which entirely disregards the consequences and is merely deciding which of two branches a runaway trolley shall take. That seems to me to be exactly what the question is trying to investigate: how would you assert that one branch carries more weight than the other without appealing to disproportionate consequences. I say you can't. Without the consequences, it's 50/50. – K. Alan Bates Apr 29 '18 at 02:12
  • Even within a utilitarian framework, your own perception of utility (there is no objective measure of utility) and situational knowledge would invariably apply. 5 octogenarians vs 1 child; 5 construction workers vs 1 scientist; 5 politicians vs 1 of "anything else" do not necessarily carry a 5-1 weight even within a utilitarian view. It's been my opinion that the most important insight here is that there is no objective answer to the problem. Choose your subjectivity wisely. – K. Alan Bates Apr 29 '18 at 03:02
  • 2
    @K.AlanBates - without the consequences, there is no ethical dilemma. If you reduce the question to "which of two tracks should a trolley take?" without information about what the decision entails, then there is no ethical dimension. – Tom Dec 27 '18 at 14:51
  • 1
    I'm missing something. How did this answer get so many upvotes? The basic claim that "both of these outcomes are equal enough to justify a 50/50 choice" is blatantly wrong: one outcome kills one person, the other outcome kills several people. It misses the whole point of the question. The question is about Taurek's observation that flipping a coin gives each individual potential victim a 50% chance of survival, not that the potential outcomes are equal. – Ray Butterworth Oct 24 '19 at 03:17
  • 2
    @RayButterworth you didn't read the answer closely. It does not state that the outcomes are equal. It states that if you flip a coin, you have already decided that doing so is a fair decision, i.e. that the outcomes are equal. The answer specifically states that, because of this, flipping a coin is not avoiding a decision, but making one. – Tom Oct 24 '19 at 05:22
  • It says that you decided that the outcomes are "equal enough". The outcomes are killing one person or killing five people. I don't see what criteria makes these seem "equal enough", when they are so obviously very unequal. – Ray Butterworth Oct 24 '19 at 13:21
  • 2
    @RayButterworth but that exactly is the point. I think everyone else got it. Please read the answer carefully. You will find that it points out the exact thing you are arguing. – Tom Oct 24 '19 at 15:21
66

This is known as the trolley problem: there is a runaway trolley and people tied to the tracks, and you can either switch to kill 1 and save 5, or do nothing and let the 5 die. Perhaps the most effective reductio of Taurek's proposal is to up the ante: instead of 1 vs 5, take 1 vs 5 billion. His logic still suggests that a coin should be flipped, "let the world die but let justice prevail". However, no standard ethical system would endorse Taurek's solution. A Kantian deontologist would have to do nothing, because either switching or flipping a coin goes against the moral duty (not to willfully kill) and is "inherently wrong". A virtue ethicist would have to switch, because switching is a compassionate act, and compassion is a virtue. And most forms of consequentialism, not just utilitarianism, would endorse switching, because the consequences of the 5 surviving are likely to be superior even if there is no single measure of utility providing a calculus on human lives. Indeed, it is hard to come up with an ethic that does endorse Taurek's solution: it would have to be a form of deontology in which "equal justice for all" is the highest moral duty.

Empirical studies show that about 90% of people choose to switch unless the 1 is a relative or a lover, in which case the rate drops steeply. This does not bode well for general ethical arguments; it suggests a situational ethics where "the devil is in the details", the very details that trolley problems are often criticized for abstracting away.

Conifold
  • 4
    Given the state of the planet there is even a potential argument for saving as few as possible. For me the key factor would be that it is not the action that will decide the ethical value of a decision but the motive behind it. If we do the best we can to make an ethically sound decision then we will have succeeded. –  Apr 25 '18 at 13:03
  • I can completely understand the concept of "giving each individual a fair chance of survival". While nearly all of us would pull the switch to kill the 1 rather than the 5, I'm sure we'd be rather more in favour of justice in the event that one of us found ourselves as the "1" rather than a member of the "5". – Jon Story Apr 25 '18 at 14:11
  • 5
    @JonStory, in the case that one of us were the 1, that's not the justice that we refer to. That's just the desire to live. – rus9384 Apr 25 '18 at 16:13
  • 1
    Wouldn't it make more mathematical sense to flip once for the one person, and then five times for the five people (thus six times) and then deduce some sort of outcome? I'm not a mathematician, but something along those lines seems more fair to me. – Asleepace Apr 25 '18 at 22:47
  • 1
    @Asleepace, let each person have a flip, and the group that got more heads gets saved. Statistically that would be consistent with my reasoning. – rus9384 Apr 25 '18 at 23:31
  • 1
    And I would throw the switch halfway, forcing a derailment on the spot. (Which is exactly what's going to happen to a runaway trolley sooner or later anyway.) – Joshua Apr 26 '18 at 02:47
  • 2
    I think that in practice most people value other people's lives the higher the closer they are to them. At least the value of a life is not regarded as a constant. – NoDataDumpNoContribution Apr 26 '18 at 08:39
  • @Joshua, interesting idea, but what about the people in the trolley? They could get hurt. – Solomon Ucko Apr 27 '18 at 00:04
  • You might be interested in this video that had some people literally playing out the trolley problem. I think the "90% choose to switch" claim is too high for real life scenarios. – David Starkey Apr 27 '18 at 14:53
  • I think that the Kantian solution should actually be do something to save someone if you can, and as the decision whom to save (which is an ethical, not a moral one in the narrow sense) will be undecidable due to the absolute worth of every single person, giving every single person the exact same chance to survive via an appropriate mechanism indeed becomes the best solution. Long story short, there are some arguments from a rational moral philosophy for that, intuitionist and utilitarian/consequentialist sentiments (which are indeed prevalent even in "Kantian" authors) aside. – Philip Klöcking Apr 28 '18 at 14:35
  • you ignore the Malthusian option, which would choose to kill the 5 billion by preference. – K. Alan Bates Apr 29 '18 at 02:10
  • @Conifold You could argue that all humans are equally valuable and that this value is infinite; then you'd have a situation where 1 × infinity = 5 × infinity. And as in this framework both are equal, you might as well flip a coin and leave it to chance. – haxor789 Jun 16 '22 at 09:28
16

From an existentialist point of view, this strategy wrongly places a human decision in the hands of an effectively random, physically determined process. Existentially speaking, the decider still bears full and undiminishable responsibility for the final choice. The intermediary of the coin is the decider's attempt to deny this to himself, as further disguised by recourse to an odd and seemingly unworkable mechanistic notion of justice.

So in the larger picture, the crisis here is the illegitimate abrogation of the burden of human judgment through deferral to a mechanical process or algorithm. There's a lot of relevance here, both looking backwards, to the entire question of the rule of law, and forwards towards the increasing likelihood of being judged morally by computerized justice.

Chris Sunami
  • 1
    "The intermediary of the coin is the decider's attempt to deny this" (responsibility) - likely so, but not necessarily. Imagine a nihilist, and hardly any information on the six people available. Then his/her resort to chance can be a personal, authentic solution without any flight from responsibility. He/she could, for example, think of it as a Russian roulette session for those people. – ttnphns Apr 25 '18 at 17:37
  • @ttnphns Nihilism is not an ethical framework, but rather the lack thereof. – Chris Sunami Apr 25 '18 at 17:46
  • Chris, a nihilist will have problems with working out or supporting values. Any value can be a point for moral decision. – ttnphns Apr 25 '18 at 17:51
8

EDIT:
I would like to note that nowhere in the original post does it posit that the moral agent in question (in this case, the Kantian Deontologist) is only able to pick one of these two choices (flip or don't flip the coin). The question isn't: either you flip a coin to determine the death of the 1/5, or you don't and they all die. If that were the question, my answer would be very different. The claim is instead that the best possible way of choosing who should die, when faced with a decision between x and y (where the only difference between x and y is, according to the knowledge available to us, the number of potential victims), is to flip a coin. Yes, under a Kantian framework, we might have a moral requirement to do something - but I don't think it would be to do this.

Original Answer:
In response to something Conifold wrote, I will first say that I do not think the Deontologist would automatically choose to do nothing when presented with this issue. A Kantian Deontologist might have certain moral duties, but to willfully choose to have a coin toss be the decisive factor in the life-or-death sentence of 1 to 5 people goes against the first formulation of the Categorical Imperative (and the second, in my opinion): "Act only on that maxim whereby thou canst at the same time will that it should become a universal law" (Fundamental Principles of the Metaphysics of Morals, Section 2).

Imagine the consequences that would result if, whenever we were presented with moral issues concerning life or death, such matters were universally decided by a mere coin toss. Under Deontology, it is a contradiction for moral agents (with genuine powers of will and critical thinking) to make such decisions on the basis of luck alone. Similarly, consider that this Categorical Imperative is usually interpreted as being akin to the golden rule: treat others as you wish to be treated. Suffice it to say, I think we can agree that we would not want people to judge the worth of our lives based on a mere flip of a coin.

In the abstract to the work, "Kantian Ethics and Economics: Autonomy, Dignity, and Character", Mark White wrote that the "key aspects of Kant's moral theory ... [include] autonomy, judgment, dignity, perfect and imperfect duty, and the categorical imperative"; note the emphasis on the rational faculties of autonomy, judgement, and the like. I don't think you need to defer to Utilitarianism to reject Taurek's claim. I think you can merely defer to the definition of what ethics is supposed to be about (under a Kantian Deontologist's interpretation, at the very least). A coin toss leaves our moral choices and actions entirely to chance, stripping us of the need for critical thinking, compassion, rationality, and ethical debate - things that I believe are crucial to the foundations of our moral decision making.

xxWallflower
  • 2
    I disagree that it violates the first categorical imperative. The issue here is that not flipping a coin denies some people any chance to survive (if the operator refuses to act, the initial group is condemned; if he does act based on the number of lives saved, the minority is condemned). The question here is more "do you want your fate to be decided (which includes certain death for some), or do you want to leave it up to chance?" The former is only objectively better if saving everyone is a possible outcome, which, for the trolley problem, it is not. – Flater Apr 26 '18 at 07:39
  • 2
    So I could similarly argue that not flipping that coin violates the first categorical imperative. No one would want to be condemned to death by a third party, and therefore no one should act in a way that condemns anyone else to die (including through willful inaction). Flipping a coin removes certain condemnation, essentially giving the otherwise condemned party (whoever it is) an increased 50% chance to survive. – Flater Apr 26 '18 at 07:41
  • 1
    This can swing either way. Unless we have a reasonable idea about what someone would choose (leaving their survival up to chance or arbitration), we cannot actually evaluate which option would be picked by anyone other than ourselves (and that's even assuming everyone is able to pick for themselves). – Flater Apr 26 '18 at 07:45
  • 1
    I think that especially in a Kantian framework, you should be careful to distinguish moral and ethical decision. And in my understanding, the moral decision would indeed be to do something, whereas we cannot and should not discuss the question of whom to save within morality. There is no moral decision here. For Kant there are no moral dilemmas, remember (Ak. 6:224)? This is a question of ethics. And I am actually quite sympathetic towards saying that externalising responsibility while at the same time giving every person potentially saved equal chances is quite a good ethical thing to do. – Philip Klöcking Apr 26 '18 at 20:44
  • Great comments here. I've amended my answer to clarify what I was attempting to say, but perhaps you will both still see my interpretation of the Kantian Deontologist's decision differently. – xxWallflower Apr 27 '18 at 02:09
5

If you are in group A (one of the group of 5), you have the same chance of survival as someone in group B (the group of one). That is the logic. Of course, the utilitarian aspect can come into the debate, but it is not related to the chance itself. It should be clear that each individual has a 50-50 chance if there are 2 groups and a coin; it is irrelevant how many are in each group.
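
To make the arithmetic concrete, here is a quick simulation sketch (my addition, not part of the original answer; the group sizes are just example values). Under the coin-flip rule, each individual's estimated survival chance comes out at about 0.5 no matter how large the two groups are:

    import random

    def survival_probabilities(group_sizes=(5, 1), trials=100_000):
        """Estimate each member's survival chance when a fair coin picks the doomed group."""
        survived = [0] * len(group_sizes)
        for _ in range(trials):
            doomed = random.randrange(len(group_sizes))  # the "coin": each group equally likely to die
            for g in range(len(group_sizes)):
                if g != doomed:
                    survived[g] += 1
        # Every member of a group shares that group's fate, so this is also the per-person chance.
        return [count / trials for count in survived]

    print(survival_probabilities((5, 1)))       # ~[0.5, 0.5]
    print(survival_probabilities((999999, 1)))  # still ~[0.5, 0.5]

The group sizes only change how many people share each outcome, not any single individual's odds.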

-Later Edit-

The ethical part of this is actually whether to toss the coin at all. Because if you do, you may doom the group of 5 without willing it. But is that worse than choosing the 1-group? There can be situations where the 1 must be saved instead of the 5, although most would choose to save the 5. But if you make a choice to save the 5 because they are more lives, where do you draw the line? Would you terminate 999,998 to save 999,999? Such things cannot be reduced to math; there can be far too many factors involved in such a decision.

Overmind
  • As I see it, the question is asking "How can I argue that Taurek's method is not the most reasonable, without appealing to utilitarianism?" not "How can I argue that Taurek's claim his scheme gives each person a 50-50 chance of dying?" – David Richerby Apr 25 '18 at 18:30
  • @Overmind I don't think it's so simple, without some distinguishing parameter on the participants I believe the coin toss is not relevant. See my answer for the reasoning. – Clumsy cat Apr 25 '18 at 21:50
  • Check my edit, a non-math perspective. – Overmind Apr 26 '18 at 08:32
5

This question is strongly related to the current debate about autonomous driving. When a crash is unavoidable, how can/should the car's computer decide what it should crash into, the group of five to the left or the single person to the right?

The answer is more or less obvious: It can't make an ethical decision.

Why is that? Simply because the car's computer has no information about the individuals it has to decide over.

And I think that is the point the original statement makes: When you have no information about the members of the two groups you cannot make an ethical decision. Hence, you should not make the decision and can only randomly pick one alternative.

Extreme example: The larger group may be a chain gang of convicted serial killers working at the side of the road, and the smaller other group may be elementary school kids waiting for their bus. If you know this, your decision may be different.

More mathematically speaking, you cannot know the probability of individuals belonging to one group or the other ("how they got there"). The coin flip (50% chance) is fair in that it carries each person's prior (relative) probability over, unchanged, into their probability of being killed. If an individual had a 90% chance of finding himself in group A and a 10% chance of finding himself in group B, then after the coin flip he has a 45% (90% × 50%) chance of being killed as a member of A and a 5% (10% × 50%) chance of being killed as a member of B. The 9:1 ratio is maintained, and everyone's total risk of death is 50%.
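
To spell out that arithmetic (a sketch I am adding; the 90%/10% membership split is the example figure used above):

    # Example prior probabilities of belonging to each group (the 90%/10% case above).
    p_in_a, p_in_b = 0.90, 0.10
    p_group_killed = 0.5  # a fair coin decides which group dies

    p_die_via_a = p_in_a * p_group_killed  # 0.45: killed as a member of group A
    p_die_via_b = p_in_b * p_group_killed  # 0.05: killed as a member of group B

    print(p_die_via_a / p_die_via_b)   # 9.0 -> the prior 9:1 ratio is preserved
    print(p_die_via_a + p_die_via_b)   # 0.5 -> everyone's total risk of death is 50%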

Of course, if you accept the "no information = no ethical decision" conclusion this implies that you should try to acquire relevant information. ("Look, people in group A all wear orange suits and are chained together.") However, you can never acquire all information about the past of the individuals, or their ethical 'value'. And you cannot even know if you received enough information yet. Hence, how can you be sure to make the ethically right decision?

JimmyB
  • 3
  • But you do in fact have information: the size of the group. And the probability that the school kid is in the larger group is also larger. – Thern Apr 25 '18 at 12:36
  • Absolutely! In this, the original question is already an extension of: What if you have two groups of individuals, A and B. Which group should be done harm to? - You can't tell without knowing anything about the groups. Next level of information is you know that group A is larger. What is the right decision? Next level is you know that group A wears orange suits. What to do? - So the question is, what is enough information? Or, specifically, can the pure number of individuals be enough information? – JimmyB Apr 25 '18 at 12:42
  • 2
    @Thern "the probability that the school kid is in the larger group is also larger." - Why would that be? - You cannot state that without further assumptions or information. – JimmyB Apr 25 '18 at 12:44
  • 7
    Autonomous cars do not make decisions. They implement the decisions made by their designers. –  Apr 25 '18 at 13:04
  • 3
    When the size of the group approaches 7 billion, the probability that it contains school kids approaches 1. This makes clear that the probability must increase with the size. Or view it from this point: if there is a school kid, and I know nothing else about the groups, the probability that it is in group A is A/(A+B). – Thern Apr 25 '18 at 13:05
  • I would not say that the size of the group is enough information. But it is information. You can't state that you know nothing about the groups and therefore must flip a coin. – Thern Apr 25 '18 at 13:06
  • With autonomous driving, there is another wrinkle, which is a scenario where the algorithm makes a choice between killing a group or killing the passenger in the car. Say, by crashing into a wall to avoid a group of children. Can the algorithm's designer ethically make this choice for the passenger? if so, do they have an ethical obligation to inform prospective owners that, should the car detect a choice between killing more than one person and allowing the passenger to possibly die, it will choose to kill them? – Dan Bryant Apr 25 '18 at 15:26
  • @Thern Knowing that there must be a school kid in group A or B is significant information. Killing 7 billion people is certain to kill at least one school kid, but it is just as certain to kill a couple of uncaught serial killers. What do you do? - We're all making assumptions about the world around us all the time based on more or less information. But in this theoretical scenario we don't have any information apart from group size, no way to know if there are kids or serial killers in either or both groups. – JimmyB Apr 25 '18 at 16:57
  • @PeterJ I don't think we can make that general statement. Software systems (AI's even more so) are not deterministic in such a way that their designers can foresee any and all possible reactions to any and all possible inputs. They set bounds and parameters, but what the system does depends on the combination of all input parameters at run time which yields almost infinitely many possible states in a multidimensional space. It's not as simple as coding "if you see n people on one side and less people on the other side, steer towards the other side." – JimmyB Apr 25 '18 at 17:13
  • 1
    "When you have no information about the members of the two groups you cannot make an ethical decision." The decision can be very hard even when you have that information - for an autonomous car, or for a human. There's an MIT experiment about this. http://moralmachine.mit.edu/ – molnarm Apr 26 '18 at 05:28
  • @JimmyB - I take your point but it doesn't seem to change anything. The unpredictability of the behaviour of the system is a direct result of its programming, nothing else. It is built in to the system by the designers. –  Apr 26 '18 at 11:32
  • @PeterJ But it is not designed to be unpredictable, but rather inherently too complex to be predictable; just as the situations the car may at some point run into. This begins with the categories the system is made to deal with: to the car, there are no people, dogs, kittens, babies, or murderers. There are only obstacles, and the car can only deal with obstacles. This abstraction has to be in place, or the car might fail to react properly to an elephant because the designers did not implement elephant avoidance logic. – JimmyB Apr 26 '18 at 15:06
  • But surely we could question one's right to make judgements about which people are "more deserving" to live than others. I'd probably agree that if I had a choice between saving the life of someone who has devoted her life to helping the poor and unfortunate or saving the life of an escaped serial killer, I'd choose (a). But I've heard plenty of discussions along the lines of, "obviously" we should save the brilliant college professor rather than the mentally retarded person, because the professor "contributes more" to society, etc. – Jay Apr 26 '18 at 19:04
  • @JimmyB The car is not a moral agent. It doesn't make decisions. The people who designed it make decisions. Granted they may not fully understand the implications of all their decisions. But that's not unique to computer engineers. We all face that problem all the time. No one ever has 100% complete information when he makes a decision, except in hypothetical textbook problems. – Jay Apr 26 '18 at 19:07
  • 1
    By the way, in Germany there was a Federal Constitutional Court ruling that an aircraft hijacked by terrorist may not be shot down if there are innocent people on board, even if the terrorists intend to crash it into, for instance, a stadium full of people. Thus, the highest court has confirmed that every single life must be protected and that 10000 human lives are not of more value than 100. – JimmyB Apr 26 '18 at 20:07
5

Mathematically, the assumption that "everyone has an equal chance at surviving" hinges on how the groups were formed. Let us call them groups A and B.

Say people are indistinguishable, and they are picked from a pool of 6 and assigned randomly, with uniform probability, to groups A and B. No matter which group you kill, everyone has an equal chance of surviving, because everyone had an equal chance of ending up in either group in the first place. In this version the coin toss is a red herring: no matter how we pick the group to be killed, everyone has an equal chance of surviving.

Now let us consider the alternative: people have names and are distinguishable, and the probability of a person being assigned to group B is proportional to the log of the length of their name. The assignment is still random, but the probability is not uniform. Now tossing a coin gives everyone an equal survival chance, whereas choosing based on group letter would not.

So the conclusion is that the problem is not well enough defined mathematically. If people are indistinguishable, there is no need to flip a coin at all: they all have an equal chance no matter what you do. If people are distinguishable, then there is by definition another parameter available for the choice to be made on. Without knowing what that parameter is, it is not possible to say whether it should affect our decision and determine who should be saved.
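
To make the two cases concrete, here is a small simulation sketch (my addition, using the answer's own assumptions: a pool of 6 people, assignment either uniform or weighted by the log of the name length; the names are invented for illustration). It compares a fixed person's survival chance under "always kill group B" versus "flip a coin":

    import math
    import random

    PEOPLE = ["Al", "Beatrice", "Christopher", "Di", "Evangeline", "Fo"]

    def assign(weighted):
        """Assign each person to group 'A' or 'B'.

        If weighted, P(B) is proportional to log(len(name)), rescaled to stay below 1;
        otherwise the assignment is uniform 50/50.
        """
        max_w = math.log(max(len(p) for p in PEOPLE))
        groups = {}
        for p in PEOPLE:
            p_b = 0.9 * math.log(len(p)) / max_w if weighted else 0.5
            groups[p] = "B" if random.random() < p_b else "A"
        return groups

    def survival_rate(person, weighted, rule, trials=50_000):
        """Fraction of trials in which `person` survives under the given kill rule."""
        survived = sum(1 for _ in range(trials) if assign(weighted)[person] != rule())
        return survived / trials

    kill_b = lambda: "B"                       # deterministic rule: always kill group B
    coin = lambda: random.choice(["A", "B"])   # Taurek's rule: flip a fair coin

    for weighted in (False, True):
        label = "name-weighted" if weighted else "uniform"
        print(label,
              "kill-B:", round(survival_rate("Christopher", weighted, kill_b), 2),
              "coin:", round(survival_rate("Christopher", weighted, coin), 2))

With uniform assignment both rules give roughly 0.5, so the coin adds nothing; with the name-weighted assignment only the coin keeps a long-named person at 0.5, which is the distinction the answer is drawing.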

Edit: To put this in the practical context of a self-driving car, I think we can be fairly certain that there will be no random number generators (coins) in the software of a self-driving car. If the programmer is trying to account for a situation where more than one party is in peril, they will almost certainly make use of inequality comparisons. Given the architecture dependency of floating point arithmetic, it is likely that even the programmer will not know what combination of inputs would lead to two perfectly equal chances. They don't worry about it because these numbers have so many significant figures that an exact tie will be vanishingly rare.

Edit 2: @supercat points out in the comments that my knowledge of algorithms is a bit lacking. There may well be randomness in some algorithms used to process data. It is still likely that the actual decisions would be based on floating point comparisons, though.

Clumsy cat
  • 2
    I'm not sure why you assume the software would have no deliberate randomness. Many algorithms require making largely-arbitrary choices that are unlikely to matter unless made in certain combinations. If such choices are made in independent random fashion, the probability of a deadly combination may be made arbitrarily low. If the choices are not independent, however, the probability of a deadly combination may be much higher. While I doubt deliberate randomness would be invoked in a high-level decision scenario, I would expect it to play a role at lower levels. – supercat Apr 25 '18 at 20:35
  • @supercat, can you give an example of an algorithm that works this way? (not doubting you, just it sounds interesting) – Clumsy cat Apr 25 '18 at 20:41
  • 1
    A couple of simple commonplace examples: 1. On communications media (radio, half-duplex Ethernet, etc.), simultaneous attempts by multiple devices to send a message will often result in neither message getting through; this is handled by having devices wait a random amount of time before retransmission. If delays are chosen randomly, the probability of 16 consecutive collisions would be quite small. If two devices would pick the same sequence of 16 delays, however, the probability that a collision would be followed by 15 more would be much higher. – supercat Apr 25 '18 at 21:04
  • 1
    In Hoare's "Quicksort" algorithm (see https://en.wikipedia.org/wiki/Quicksort), which was invented in 1959 but is still widely used today, the worst-case execution time may be many orders of magnitude larger than the average execution time, but if pivot elements are chosen randomly, the probability of the execution time being more than twice the average is very small. If pivots are only affected by the sequence of items, however, with no chance factors, it may be hard to prove that no possible (perhaps contrived) sequence of items would yield performance that is orders of magnitude worse. – supercat Apr 25 '18 at 21:08
  • Note that in both of these situations, the random generators are used to make decisions where the vast majority of possible choices are almost equally good, and where even making mostly bad choices would be acceptable, provided only that the code picks a good choice at least occasionally. – supercat Apr 25 '18 at 21:15
  • Your alternative is not relevant. It was not a parameter in the question (groups selection). – Overmind Apr 26 '18 at 08:11
  • @Overmind I don't see how one selection method can be more relevant than the other. The question doesn't specify how groups get selected, but the selection method is definitely required to determine how the probabilities behave. – Clumsy cat Apr 26 '18 at 09:57