ACTION-PREFERENCE NEXUS: Among the actions available to you, you should perform one of those whose consequences you should prefer to all the rest.
AGENT-NEUTRALITY: Which consequences you should prefer is fixed by descriptions of consequences that make no indexical reference to you.
I think this is both too strong and too weak. It is too strong because consequentialism doesn't require agent-neutrality. Egoism is clearly consequentialist in nature, as are other forms of agent-relative welfarism (e.g. views that are utilitarian at base, but then weaken the strict equality of interests to instead allow agents to weight the interests of their nearest and dearest more heavily than those of strangers). As Setiya acknowledges, this is a "terminological" point. Nonetheless, some ways of using terms do a better job than others of carving nature at the joints, and if we're looking for a fundamental structural divide with which to categorize all normative ethical theories, you've made a real mistake if you don't recognize that egoism, utilitarianism, and agent-relative welfarism all belong on the same side.
On the other hand, Setiya's definition is too weak because the Action-Preference Nexus is (as he himself notes) "not a claim about the order of explanation but about the congruence of reasons". And such a congruence seems difficult to deny -- even many self-identified deontologists [though not all!] will surely say that we should prefer not to act wrongly, for example. But so long as the explanation of the congruence takes some reasons for action to be prior to reasons for desire (i.e. takes the right to be prior to the good), then the view in question would seem best categorized as "deontological" in nature, not "consequentialist". This seems so even if the view in question is also agent-neutral, as is the view Setiya defends in his paper. You can (albeit with some difficulty) hold that everyone ought to prefer that agents not engage in utilitarian sacrifice, or kill one to prevent more killings. But insofar as these are moral side-constraints on action that you are elevating to the status of universal desirability, the resulting view seems all the more "deontological" in nature.
One way to see this is to invoke my 'naturalization' test for axiological vs deontic reasons. Suppose it were not an agent, but rather a bolt of lightning, that killed the one and thereby saved the five. How would the resulting state of affairs compare to the alternative where the five were killed? Presumably it's less bad: fewer people died, and nobody was treated as a means, had their rights violated, or was otherwise "wronged" in any way.
Since it makes such a difference to our (commonsense) moral assessment whether the one is killed by an agent or by natural causes, it seems that the injection of agency into the picture is responsible for flipping our verdicts about desirability in the original "killing one to save five" case -- suggesting that we only prefer that the agent not kill the one because of an antecedent judgment that the act would be morally wrong (rather than judging it wrong because it brings about an antecedently undesirable outcome, as a consequentialist version of the view would have it).
[By contrast, consider how a properly consequentialist version of Setiya's view would go. For "killing one to save five" to be antecedently undesirable (i.e. undesirable independently of any deontic reasons that stem from the action's putative wrongness), it would seem that the undesirability must be found in the general causal structure of a situation in which five are saved only by means of the death of an innocent person. This causal structure is held in common whether the innocent is killed by a moral agent or by a lightning bolt. So the Setiya-consequentialist should presumably also judge that it is preferable that five innocents die than that they be saved by a bolt of lightning striking down one other innocent. (For concreteness, suppose that the lightning bolt blows a bystander off a bridge and in front of an oncoming trolley, activating the emergency brakes and stopping it before it reaches the five innocents stuck on the tracks.) But such a verdict doesn't seem remotely substantively plausible, as noted above: If the lightning strikes then fewer people die, and nobody is wronged, so how could it possibly be worse? It just seems bizarre to imbue the causal structure in question with such immense moral significance, compared to more obviously important things like people's lives.]
Two upshots:
(1) Consequentialism is orthogonal to agent-neutrality. You can have agent-neutral or agent-relative forms of consequentialism, and you can have agent-neutral or agent-relative forms of deontology.
(2) The Consequentialism / Deontology distinction is best understood in terms of explanatory priority. As the traditional question goes: Is the Good prior to the Right? Or, in other terms, is what we should do determined by what we should prefer? Consequentialists say 'yes': the justification of action is wholly downstream of the evaluation of outcomes. Deontologists deny this, introducing side-constraints on action that are to some extent independent of outcome-evaluations. And agent-neutral deontologists may further add that at least some evaluations (e.g. of states of affairs involving wrong actions) depend upon prior facts about what's right or wrong.
It thus seems to me that Setiya's "commonsense" view about the ethics of killing is best understood as a form of agent-neutral deontology. It doesn't really belong in the same camp as utilitarianism, egoism, etc.
There's a kind of false dichotomy in (2). That is, if we agree that what we should do depends on what we prefer, side constraints will still come up, because we cannot do certain things without being certain things, and "being certain things" affects the attainment of what we prefer. For example, in Newcomb's problem, you prefer to get the contents of both boxes to getting the contents of only one of them. But you also prefer to get more rather than get less, and if you take both you will get less, because you will not only be doing something, but being something, namely a two-boxer. So "take only one box" ends up being a side constraint that follows even for the consequentialist.
Another way of saying this is that consequentialists and deontologists are either both right, or both wrong, depending on how they interpret their theories.
Newcomb cases are always fun, but insofar as the motivation for the "side-constraint" in question is wholly instrumental to bringing about better outcomes, it is not "independent of outcome-evaluations" in the way that I characterized deontological side-constraints as being. It's just a kind of indirect consequentialism.
A few thoughts about this (mostly in disagreement)...
I like Shelly Kagan's distinction between factoral and foundational consequentialism. Factoral consequentialism is "the theory that goodness of results is the only directly relevant factor in determining the status of an act". Foundational consequentialism is "the view that whatever the genuinely relevant normative factors may be, the ground of their relevance ultimately lies in their connection to the promotion of the overall good". [Quoting from "The Structure of Normative Ethics"]
Both seem to me to capture some of the received wisdom about what consequentialism is, but as Kagan points out they are independent of each other. Your (2) looks like it captures something like foundational consequentialism (though I'm not totally happy with the formulation in terms of what we should prefer).
As for agent-neutrality, I tend to think that it is implied by foundational consequentialism at least, and perhaps by factoral consequentialism as well. This is because I can make better sense of agent-relative reasons than agent-relative value. If agent-relative value cannot play a foundational role, then an agent-relative theory must be foundationally non-consequentialist. If agent-relative value makes no sense at all, then an agent-relative theory must be factorally non-consequentialist too. It does not seem to me that egoism is intuitively grouped with utilitarianism (maybe the idea that it is comes from reading too much Sidgwick!). Suppose, for example, that someone explains their (ethical) egoism by insisting that each person only has duties to themselves - there is no assumption that they reduce such duties to maximising their own well-being. In contrast, the kind of (rational) egoism that simply talks in terms of maximising one's expected utility is eliminativist about moral reasons and so not akin to any kind of moral theory.
A final point: you several times gloss the idea of the right being prior to the good as the idea that (some) reasons for action are prior to reasons for desire. This seems wrong to me. I think that the right is prior to the good, but I do not think that any reasons for action are prior to reasons for desire. I think your way of carving things here misses the Kantian view (at least on the constructivist interpretation), according to which we have reasons for selecting maxims (which are reasons for desire in the broad sense) that are not dependent on the values that we attach to states of affairs. This Kantian view is clearly foundationally non-consequentialist, because it does not take the good to be foundational, but Kant wants to derive reasons for action from reasons for desire. I think that you are assuming that the only reasons for desire come either from what is valuable or from reasons for action, but these options are not exhaustive.
Hi Daniel, thanks, I'll have to think more about Kagan's distinction -- it does seem helpful. (I take it that Rule Consequentialism, for example, is to be understood in these terms as foundationally but not factorally consequentialist?)
I think much of your disagreement stems from a couple of points on which I agree with Setiya:
(1) Being somewhat skeptical of a distinctively 'moral' domain, and preferring to instead just talk about (all-things-considered) normative practical theories. (Hence my following Sidgwick in seeing rational egoism and utilitarianism as closely related rivals.)
(2) Starting from something like fitting attitudes (desirability/preferability) rather than any independent notion of value. (One advantage of this is that it makes much better sense of agent-relative welfarism, which strikes me as a rather compelling view.)
Do you think you could get on board with my divide if instead of the traditional terminology, I was to speak of 'teleology' instead of 'consequentialism'? The point is that it's fundamentally goal-directed, in contrast to deontological views on which one's proper aims should be tempered by an independent understanding of right action. Whether you're willing to use the term 'value' for the right goals strikes me as less important than this fundamental structural divide.
Hi Richard,
Regarding your naturalization test, you say in that post that "Once consequentialists build the intrinsic disvalue of vicious action into their axiology, we need a more sophisticated test to distinguish them from deontologists (and hence to distinguish genuinely deontological intuitions from mere axiology-refining intuitions). At this point we may turn to agent-neutrality...", etc. However, in the present post you reject agent-neutrality as a way of distinguishing deontology from consequentialism. But then, I'm not sure how you propose we distinguish deontology from a form of consequentialism that builds the intrinsic disvalue of vicious action into its axiology. Furthermore, I'm not sure how to construe "vicious" action if not as "immoral" action (or "morally wrong", or whatever one calls it). So, I'd like to ask for clarification on those points.
Regarding the lightning example in your original test, I actually do think there is an immoral action: namely, I think it's immoral to enjoy the fact that a Black guy was struck by lightning, with the potential (but I'd say not very likely) exception of the first moments of enjoyment if they can't control that. But even if the initial enjoyment is not immoral, it seems to me that the failure to take actions to no longer enjoy it, is immoral, so even if there is no immoral action, there is an immoral failure to act.
Granted, it's still a bad outcome, because the Black guy was killed by lightning. But what if he was not, but they believe he was?
For example, let's say that in the "KKK world" scenario, some people are making a movie, and for the movie, they have realistic-looking dolls. It turns out that lightning strikes one of them, and someone accidentally catches the strike (but not what happens later) from a distance with a cell phone camera, from a fast-moving vehicle. He later posts the video online without knowing the strike was caught on camera, and it looks pretty realistic to most if not all of the people who watch the video, who are also White and racist. It turns out that hundreds of racists end up deriving pleasure from it, when they see the footage. Their behavior seems just as immoral to me as if the victim were an actual person. But there is no actual victim, and certainly the event "lightning hits a prop that looks like a Black guy" is not nearly as bad as the event "lightning hits and kills a person", regardless of the person's race. We may even add - though it's unnecessary - that the damage to the prop has negligible impact on a massive movie company.
Hi Angra,
(1) By 'vicious' I mean having pro-attitudes towards intrinsic bads, e.g. sadistic desires are paradigmatically vicious in this sense. (Hurka offers a recursive consequentialist account of virtue and vice along these lines, if you're interested.) Agents can exhibit this vice, taking pleasure in what they believe to be another's suffering, even if their beliefs are actually false (as in your thought experiment).
(2) Right, I no longer think agent-neutrality is the right term for what I had in mind back then. It's instead more a matter of whether you seek to minimize the bad in question or instead treat it as a side constraint. Viciously killing one to prevent five other vicious killings, that sort of thing.
Thanks again for the discussion! I think the issues here are largely terminological, and that there are many interesting joints to carve. You could build into the definition of consequentialism that reasons for preference are "prior to" or "ground" reasons for action, whereas deontologists claim the reverse. That is one way to use these terms. But if we define them that way, we risk ignoring the no-priority view that neither sort of reason explains the other.
On the substance, the bit I mainly disagree with is the "naturalization" test. I think it should make a difference to our preference over outcomes whether e.g. five are saved by lightning that kills one or by an intentional killing. Agency matters. But I don't see how this brings in any "antecedent judgment that the act would be morally wrong." The reasons for preference here lie in the non-moral features of the action that simultaneously give its agent reason not to perform it and give the rest of us reason to prefer that it not be performed.