One reason for thinking that consequentialism must be correct is that otherwise it might be morally wrong to make the world a better place, and that's just absurd. To bring out the force of this argument, consider the old 'Organ Harvest' scenario, where a doctor could kill an innocent passerby so as to transplant his organs into five needy patients (one who needs a heart, two needing kidneys, etc.), and thereby save their lives. Intuitively, we think it must be wrong for the doctor to kill, no matter the beneficial consequences. But can this intuition survive rational reflection?
Let's remove homicide from the picture, and consider two alternative possible worlds. In the first world, the five patients die, and the passerby goes on to live a happy life. In the second world, the passerby's head falls off (by brute natural chance) just as he's walking past the doctor's office. The doctor then uses the man's organs to save the five patients, who each go on to live happy lives. Further suppose that all else is equal - there are no other relevant differences between the two possible worlds. Which world is better? Surely, based on the descriptions provided, we must judge that world #2 is the better one.
Now suppose that God lets you choose which of the two worlds to actualize. (After making your decision, the divine encounter will be wiped from your memory.) You get two buttons. If you press the first button, the real world will be world #1, and the passerby will live. If you press the second button, the real world will be world #2, and the five patients will live instead. Which button should you press? Again, it seems obvious that you should press button #2. Given the option, we should choose to make the world a better place rather than a worse one.
Let's elaborate on how it is that the passerby's head happens to fall off (in the second world). It turns out an invisibly thin razor-sharp wire had been blown by a freak wind which fixed its position at neck height where the man was walking past. (No-one else was hurt and the wire soon untangled itself and blew away harmlessly into the nearest dumpster.) This presumably will not alter the moral status of any of our above judgments.
Now suppose that, instead of two buttons, God gives you a length of razor-sharp wire with which you can make your decision. By putting it straight in the dumpster, you will realize world #1. By fixing it in the appropriate place, you will realize world #2. Again, God will later wipe all memory of your decision and actions. What should you do? The situation seems morally equivalent to the previous one. There don't seem to be any relevant grounds for changing your choice. Thus the right thing to do, in this bizarrely contrived scenario, is to kill the passerby.
So why shouldn't doctors go around killing people? Why, it would have bad consequences, of course! In the real world we are never perfectly rational and fully informed, so we can easily make mistakes when attempting utility calculations. There's the risk that you would be found out, which could result in widespread panic and a "climate of fear" that would make life very unpleasant for everyone in the society. Moreover, the action itself would probably warp your psychology, and without God to reset your brain, it might dispose you to commit wrong (utility-thwarting) acts in future.
So the utilitarian can agree that homicidal doctors are almost always acting wrongly. But in any (bizarrely unrealistic) scenario where it would actually do more good than harm, we must judge that it would indeed be right. (Given that such situations would never arise in real life, this judgment has no practical import.) Though this initially seems counter-intuitive, further reflection can support the judgment. The real absurdity is not the utilitarian conclusion, but the bizarre scenario that critics of utilitarianism are appealing to here.
Richard:
If we take seriously the notion of making the world better, we cannot confine ourselves to consequences. For an action genuinely to make the world better, by 'world' we have to mean something that includes the action itself. So for most non-consequentialists the 'world' we choose would have to be understood as including not merely the scenario we choose but also the choosing of the scenario itself. And for a non-consequentialist it would make perfect sense to ask the question, "Even considered independently of any consequences of this choosing, is this choosing itself a world-worsening constituent of the world?"
Thus, e.g., why shouldn't doctors go around with homicidal intent? -- yes, it would have bad consequences (and thus is prudentially to be avoided), but more importantly because, ceteris paribus, a world with doctors of homicidal intent is itself a worse world, and this (most non-consequentialists will say) we know without even having to look at the consequences of this intent to see what they are. Indeed, it seems to me a non-consequentialist would usually say that we would know that a world with doctors of homicidal intent is itself a worse world than a world without them, even if it turned out (to our very great surprise) that having doctors of homicidal intent resulted in uniformly good consequences. So, I'd think, your scenarios could perhaps be spun in favor of a non-consequentialist position.
Brandon, that's a good point about how the decision/action itself is part of the world. But where in my sequence of scenarios would you draw the line? I'm guessing you'll have to do it between the button and razor-wire scenarios. Surely you agree that it is right to press button #2. But how then do you justify not laying out the razor-wire? What morally-relevant difference is there?
quote: 'One reason for thinking that consequentialism must be correct is that otherwise it might be morally wrong to make the world a better place, and that's just absurd.'
Denying consequentialism doesn't make it morally wrong to make the world a better place; it just doesn't make it morally obligatory.
Consequentialism says that in order to do good you must do whatever is within your power to continually make the world a better place; if you are not, then you are doing wrong and should not be considered a 'good person'.
Richard,
Sorry to get back to this so late; I've been busy finishing and presenting a paper for a conference.
I'm not sure I entirely understand your question. For most non-consequentialists, I take it, the major (although not necessarily the only) consideration would be the intent involved in the choice itself. Thus I suspect most non-consequentialists would be disinclined to accept any cut-off point; in all the scenarios there is something wrong with choosing the second option. That doesn't mean, of course, that they are all equally wrong. In your first alternative-worlds scenario, for instance, someone could say that we are caught in a lose-lose situation; what we are doing is not making the right action but making the least wrong one. It seems to be least wrong because (1) the scenario makes our responsibility for the death indirect; (2) in either alternative, someone will die as a result of our decision; and (3) a non-consequentialist can allow consequentialist considerations as mitigating factors (some non-consequentialists are absolutely non-consequentialists; but others are only non-consequentialists in the sense that they, unlike consequentialists, think that consequentialism is OK as far as it goes, but leaves out something essential). So, since in all the cases it is at least somewhat wrong to choose the option involving someone's death, there doesn't need to be a cut-off point. When we are talking about 'right' in a case like your first scenario, what we really mean is 'least problematic to prefer'. But it would seem that this is a different sort of rightness than the rightness of (say) helping orphaned children find good homes.
And isn't this, in a sense, the non-consequentialist's whole point, i.e., that the consequentialist is too tempted to treat 'least problematic to prefer' as 'right' (properly speaking)?
quote: 'Let's remove homicide from the picture, and consider two alternative possible worlds. In the first world, the five patients die, and the passerby goes on to live a happy life. In the second world, the passerby's head falls off (by brute natural chance) just as he's walking past the doctor's office. The doctor then uses the man's organs to save the five patients, who each go on to live happy lives. Further suppose that all else is equal - there are no other relevant differences between the two possible worlds. Which world is better? Surely, based on the descriptions provided, we must judge that world #2 is the better one.'
I don't think that your example (even with the razor wire) is analogous to the example of the doctor killing a patient. Therefore it proves nothing. (The whole argument proves nothing.)
It's funny you seem to realize it. You write: 'Let's remove homicide from the picture, and consider two alternative possible worlds.'
But if you remove homicide from the picture, how can what you say be relevant to the discussion? The problem with consequentialist theories that don't include the intrinsic badness of certain sorts of acts (irrespective of consequences) is exactly that they end up justifying homicides in very peculiar conditions.
Or you may want to say that you still are dealing with homicide. Let us assume for the sake of the argument that "making it the case that a possible world P becomes the actual world" where P contains a death, is equal to killing a person.
Yet, the case in which you are the god is a completely different sort of case from the one concerning the doctor. In the god case, you have to choose between killing 1 person or killing 5, and of course if you are sane you will choose the former (which is not to say that you will feel no regret... I guess our attitude towards these scenarios is a desire not to be the one who has to make the choice). The doctor does not have to choose between killing a person and killing 5: he has to choose between killing a person and letting 5 die, which is an entirely different thing (or at least, the burden is on you to show that it is not an entirely different thing).
Ciao!
Michele
Michele, you seem to be missing the point of my post. The first case is clearly very different from homicide. But then I introduce incremental changes, none of which seem morally relevant, yet we end up with a case of full-blown homicide (i.e. intentionally fixing a piece of razor wire so as to decapitate someone!). The burden is to explain why the latter case is wrong if the former one isn't. What's the relevant difference? (Put another way, my series of cases just shows that there's no great significance to the killing / letting die distinction.)
Richard,
What you are saying is that the basic utilitarian argument for doctors killing passers-by for organ harvesting purposes fails on the premise that the negative externalities of such an action outweigh the benefits. This is an argument that I often pose to my siblings as well. Of course, then we have the problem that the potential for externalities is infinite since no one acts in a closed system (save for the earth itself). Anyway, I guess if we want to apply utilitarianism on the second level, we have to be wary of how we can draw lines on what externalities to consider. Are we to use common sense? Are we to consider only the first-order externalities that we can conceive of?
On a side note, as a counter-example for the doctors-killing-strangers scenario, I usually pose the scenario of the runaway train: A runaway train is going to hit 5 people on a railroad track. At the same time, a thin man is passing over the track on a foot bridge, while a fat man is crossing from the opposite direction. The thin man is certain that he can only stop the train and save the 5 lives if he pushes the fat man into the path of the train, thus killing him. Would he be right to do it?
Many people I ask would say yes. The reason for this (I believe, though few people can articulate it) is that this scenario is so fantastic that no one could see any probable negative externalities arising from the single event. Will fat people fear to cross foot-paths over train tracks? This seems far less likely than people fearing walking near (or checking into) hospitals for fear of regularized kidnap and organ harvesting.
So there does seem to be a capacity for applying common-sense externalities to similar scenarios and coming up with different outcomes. The problem is how stark these two examples are. I'm wondering if any sort of real-world application of the principle might become so convoluted as to allow any position to be argued, and thus offer no concrete solution, which is supposed to be the ultimate benefit of utilitarianism in the first place.