The trolley dilemma is summed up in two parts as follows. Suppose a trolley is running down a hill at speed, heading towards five people at the bottom of the track. When it reaches them it will surely kill all of them. You notice a switch next to you that would divert the trolley onto a side track where a single man is standing; if you pull it, that one man will die instead. Would you do it?

Most people would answer this question in the affirmative. Let us call this the switch scenario. In the second scenario, a trolley is again running down a hill at speed, aimed at five people at the bottom whom it will surely kill. This time, however, you are standing on a bridge next to a fat man. If you push the fat man off the bridge, his body will stop the trolley, but he will be killed. Would you do it?

The common answer here is no. This is somewhat strange, as the consequences of the two actions are the same. Moreover, there is no easy way to justify why the lone man in the switch scenario is any less innocent or involved than the fat man, nor is there any increase or decrease in your own involvement. In both situations the person was at the wrong place at the wrong time, and in both situations you are actively deciding whom to kill.

There is, however, I believe, a problem with the scenarios that distinguishes them. First, let me begin by saying that morality is in our nature, in the sense that it is somehow an evolutionary trait that we inherit. Whether it sits in our DNA or is a social trait is somewhat irrelevant to the discussion, but if we did not possess some common moral code, our species would be extinct. As such, I find it hard to believe that rationality has a large part to play.

There is also the problem of the moral code. Most people mistakenly think that there are absolute axioms of morals (absolute in the personal sense, so that a person will say any act contradicting these is immoral for them). For example, a common one a person may hold is “Thou shalt not kill”. The problem with this is twofold. First, there are clearly cases in which most people would consider it a moral act to kill, for example a soldier killing the guards at a concentration camp to free the prisoners. Secondly, when there is more than one such axiom, they tend to contradict one another. Take as a second axiom that “one should reduce suffering”; the two collide in the case of a terminally ill patient in great pain. Thus morality, by its very nature, considers the situation at hand.

The problem with the scenarios is then the following. The morality we receive is granted, more or less, by intuition. The first scenario is one that is imaginable: one may think, for example, of a pilot trying to decide where to crash a plane so as to save as many people on the ground as possible. The second is not. It comes with an array of other possible alternatives and an array of surrounding uncertainty. First, unlike the first scenario, where one can readily imagine that the switch would change the track of the trolley, there is no guarantee in real life that the fat man would stop it. Secondly, in the second scenario one has to ask: why the fat man and not us? Is there any guarantee that if I jumped in front of the trolley myself it would not have stopped?

My point here isn’t that the scenarios are posed improperly, but that the second scenario is unrealistic, and since our morality is governed by the scenarios we can experience (and those we have experienced), our answer to the second question seems to contradict the values described by our answer to the first. We are unconsciously trying to relate the situation to a more realistic one we may have encountered, and so the questions raised above, though ruled out by the scenario itself, still affect our judgment of the morality of the action.
