Those put off by the putative counterexamples to Act Consequentialism may consider Rule Consequentialism a more appealing alternative. Michael Huemer goes so far as to suggest that it is "not a crazy view." In this post, I'll explain why I think Rule Consequentialism is not well-supported -- and, at least as standardly formulated, may even be crazy.
There are three main motivations for Rule Consequentialism (RC). One -- most common amongst non-specialists -- stems from the sense that it would be better (in practice) for people to be guided by generally-reliable rules than to attempt to explicitly calculate expected utilities on a case-by-case basis. But of course this is no reason to prefer RC as a criterion of right; this consideration instead pulls one towards multi-level act utilitarianism (on which the right decision procedure is something other than constant calculation).
A better argument for RC (and the one that seems to motivate Huemer) is that it better systematizes our moral intuitions about cases. But I think this is bad moral methodology -- matching superficial intuitions about cases is much less important than conforming to our deeper understanding of what really matters. And RC is notoriously difficult to reconcile with the idea that promoting well-being (rather than blindly following rules) is what matters.
Perhaps the most principled argument for RC stems from the contractualist ideal of acting on principles that are systematically justifiable to others. Parfit's project in On What Matters was to argue that such contractualist foundations should lead one to Rule Consequentialism. But as I argue in chapter 5 of Parfit's Ethics, it's obscure why we should want the rules we act upon, rather than simply our acts themselves, to be justifiable to others:
[T]he mere fact that the best uniform (or universal) principles recommend an act does not mean that this specific act is any good—the principles’ benefits may stem from other cases. This prompts a couple of deep challenges to Parfit’s rule-based approach: (i) When an optimal act is ruled out by optimal principles, why prioritize the principles—why should acting optimally ever be considered “unjustifiable”? (ii) Different people might do better to be guided by different principles—so, even on a rule- or principle-based approach, why require uniformity?
So I'm dubious of the putative reasons to favour RC in the first place. Moreover, it seems to me that RC is subject to powerful objections.
(1) It's subject to all the standard objections to views that aren't fundamentally consequentialist: (i) it gives bad (rule-fetishizing) answers to the question of what fundamentally matters; (ii) it implies that benevolent spectators should often hope that (fully-informed) agents act wrongly; (iii) it's subject to the paradoxes of deontology, both old and new.
(2) More distinctively, RC (at least as standardly formulated) has absurd implications in any scenario where the optimific rules were good to accept but not good to act upon.
For example, an evil demon could threaten to torture us all unless we come to accept & approve of torturing puppies. (Crucially, the actual act of torturing puppies does not achieve any good whatsoever in this scenario; the belief is enough.) Obviously, one should not torture puppies in this case -- there isn't even the slightest reason to do so.
This is very different from putative counterexamples to act consequentialism, where one might feel that the act "seems wrong", but you can at least see how there are weighty reasons counting in its favour (e.g. saving more lives!). In this case, what we're able to show is that the Rule Consequentialist's assumed link between reasons for accepting a moral code and reasons for acting upon it is fallacious. There's just no essential connection there. But that's the basis for the whole theory.
Could RC be saved by reformulating it in terms of rules that are good just in virtue of the value of the acts that they lead to? I don't recall seeing anyone else formulate the view this way, but it does seem an essential move in order to address this (otherwise decisive) objection. The resulting view starts to look increasingly ad hoc, however -- once you've gone this far, why not simply accept the multi-level act utilitarian view that the rules are mere rules of thumb, rather than in-principle determinants of rightness or normative reasons for actions?
(3) As Podgorski argues, RC is subject to the "distant world" objection, as it "determines what we ought to do by evaluating worlds that differ from ours in more than what is up to us." It seems that this will inevitably lead to clearly bad recommendations in special cases (such as Podgorski's "duds").
(Caleb Perl claims to "solve" this by jettisoning counterfactual evaluation in favour of the "consilience" principle that "the moral value of a rule R is everything actual that’s agent-neutrally good or bad to the extent it’s caused by actions that R classifies as morally right." But such a blinkered form of evaluation will surely be subject to even more egregious counterexamples. E.g. suppose that R permits both good and extremely bad acts, but we're in a world where people have only performed the good acts. We shouldn't conclude from this that R is a good rule, or that its non-actual (extremely bad!) instances are permissible.)
(4) RC is a structural mess. As I explain in my (2012) 'Fittingness' paper:
Rule consequentialists first identify the rules that are best in terms of impartial welfare (or what’s antecedently desirable), and then specify that we have decisive reasons to act in accordance with these rules. Finally, they might add, we have overriding reasons to desire that we so act. This way, a prohibited act may be ‘best’ according to the antecedent (agent-neutral welfarist) reasons for desire, and yet be bad (undesirable) all things considered. This avoids the incoherence [of preferring to act wrongly]. But it also brings out how convoluted the view really is. It is recognizably consequentialist in the sense that it takes (some) reasons for desire as fundamental, and subsequently derives an account of reasons for action. But then it goes back and “fills in” further reasons for desire — trumping the original axiology — to make sure that they fit the account of right action. In this sense it exhibits a deontological streak: reasons for action are at least partly prior to reasons for desire. In other words, the initial axiology includes only some values (the ‘non-moral’, agent-neutral welfarist ones), and what’s right serves to determine the remaining (‘post-moral’, all things considered) good.
I don't have a further argument against accepting a moral theory with this structure. It's not strictly incoherent or anything. I just think it's unappealing once brought to light, especially when the view lacks significant compensating advantages. (I think this also brings out why we might reasonably regard RC as not really consequentialist, despite its name.)
While I'm not a consequentialist of any sort, I am extremely skeptical of all of these arguments against rule consequentialism, with the exception of the distant worlds objection (which, however, I think shows that a rule consequentialist should avoid a particular view of moral rule evaluation, not the falsehood of rule consequentialism itself). This is true even of the argument against the popular objection to rule consequentialism; it *does* give a reason to prefer RC, namely, simplicity, one sign of which is that people regularly find it easier to think in terms of RC, with its close association of criterion and decision procedure, than in terms of multi-level act consequentialism. This is, to be sure, not a decisive reason, but since you later provide the convolutedness objection to RC, I think it's fair for the RCist to point out that there seems to be some reason to think that people in fact tend to find RC less convoluted and difficult to navigate than MLAC.
RC *is* fundamentally consequentialist. All RCists hold that the moral rules we actually are justified in using can change over time (e.g., if situations change, if technologies change, if we discover something about the relevant consequences that wasn't known before), on the ground of whichever consequences they take to be relevant to moral life, and therefore it doesn't make much sense to say that they are rule-fetishizing. The hope objection runs into the problem that RCists usually take rule-following to be one of the contributors to the best outcome (to take just one example, the best outcome for human beings will always involve having the best society, and societies are partly constituted by enforcement of rules and codes). Perhaps there's some version of it that could still be run, but it would have to be one that makes sense of the best outcome not being one in which people have moral codes that guide them in acting with reasonable regularity and predictability for the general and overall improvement of outcomes.
(2) seems to require that we simply split systems of rules into two: one for acceptance/approval and one for acting. This is certainly an inconvenience, but it's unclear why it is supposed to be absurd. It is not unheard of, actually, in the history of ethics (many early modern versions of ethics distinguish between agent-focused and spectator-focused rules this way, and it has the advantage that it makes ethics and aesthetics parallel, since standards of artistic production and standards of taste are generally recognized to be distinct and in unusual situations can come apart). And even if one holds that there is no essential connection, I'm not sure why the RCist can't simply say that nonetheless, there is an empirical one. After all, evil demon scenarios are among other things designed to get around what in fact we take our actual evidence to be. Yes, it would be nice to have a necessary connection (if we ever meet any aliens or evil demons, for instance), but sometimes we only have a factual one, and we can usually do just fine with that.
I have to head off to virtual class, but on the distant worlds objection, my own view is that it really shows just that RCists should be (like Mill) positivists about obligation, seeing obligations and moral codes as mechanisms we design for a better world rather than as things given in the nature of reality.
Hi Brandon!
I should clarify that I'm not concerned about what folk find "convoluted" or "difficult to navigate" -- I don't think subjective reactions are relevant to assessing the truth of a moral theory. What's more relevant, I think, is the objective structure of the theory, and RC is convoluted here (in its interplay of reasons for action and desire) in a way that MLAC simply isn't.
> "moral rules we actually are justified in using can change over time... and therefore it doesn't make much sense to say that they are rule-fetishizing."
Well, not on that basis, but what about the fact that it tells you to abide by a rule even when you know that following the rule in your circumstances would be counterproductive? That seems to indicate that their rules have been imbued with excessive or undue moral significance.
> "[The objection] would have to be one that makes sense of the best outcome not being one in which people have moral codes that guide them in acting with reasonable regularity and predictability for the general and overall improvement of outcomes."
Not at all. MLAC endorses being guided by moral codes (understood as rules of thumb) insofar as this is a good thing. But (i) it allows for more exceptions and individual variation in appropriate rules than does RC, and (ii) in cases where the objectively best exceptions are unknowable, it holds that the objective moral reasons track the value facts (rather than pretending that the rules matter objectively), while agreeing that it's most rational (or supported by "subjective"/evidence-relative moral reasons) to continue following the generally-reliable moral rules.
"Well, not on that basis, but what about the fact that it tells you to abide by a rule even when you know that following the rule in your circumstances would be counterproductive?"
This seems ambiguous about what "your circumstances" are. Are they this particular situation, or the greater context in which people following these rules is beneficial, for which your own particular situation may serve as example, influence, solidary support, etc.? Rule consequentialists are *conservative* about rules, just by the structure of the approach, but this is because they don't see right and wrong as being about getting the best result in this or that particular circumstance by this or that particular action, but about us all together getting better results overall by the *kinds* of things we do, which we capture in rules. The consequences still dominate the consideration. But rule consequentialists are building on a bigger scale than act consequentialists are, and therefore they are more tolerant of particular non-optimalities. Think of it by analogy. You don't custom-build one-use machines for every single thing you need done. When you need a computing job done, you don't build the computer and program it from scratch every single time. If you did that, and you were a genius engineer, then you'd get computing hardware and software that was absolutely optimal for each particular use. But you don't build a computing culture that way -- in fact, that guarantees you'll never get a computing culture, because you can't build a culture on the assumption that everyone is a genius engineer able to start from scratch. You do it with general-purpose machines that are not going to be optimal for particular cases, but are going to be good enough for enough uses that they are optimal for the general usefulness of computers. A rule consequentialist has some tolerance for nonoptimalities in particulars in order to get a better result overall.
So far that's a common thing among consequentialists, but one thing that's distinctive of rule consequentialists is that they think that moral right and wrong are not located at the level of the particulars, where not every kind of badness is moral badness, but at the level of the specification of the overall system of which this or that particular action is just a part.
This is why your MLAC response doesn't seem to work. MLAC is not the hope objection; it's just an alternative theory of why we use rules. The hope objection seems to assume that the rule consequentialist is already wrong about the level at which morally relevant hope occurs -- that the best overall outcome we are considering is that of this particular circumstance, not the wider society of which it is a part.
I suppose I think of 'convolutedness' as a difficulty of navigation, in which case the best evidence of whether something is convoluted is how difficult people actually find it to navigate -- which would certainly be at least a disadvantage in a practical field like ethics. I guess I'm not sure what supposed badness of the structure you are trying to point out if we are setting aside whether it's hard for a typical reasonable person to navigate it in practice. Perhaps I'll have to re-read your paper more closely on this point.
I suppose another way to state this, which just occurred to me after I clicked 'publish', is that for RC, right and wrong are entirely about the way we work together for overall good, which we do by shared standards, through which we cohere in common cause. Looking at the meticulous details will indeed show some cases in which our doing what is right (by the shared standard of our common project) will not get the best results *in that case*, but the whole point of the shared standards is that it is in fact the *right kind of working together with everybody else* that gets the best results overall, and therefore getting our joint moral project right outweighs any nonoptimalities that arise in particular cases by accident or freak circumstance. (If the nonoptimalities are not accidental or rare chance events, they can get taken into account in an improved ruleset, in a future upgrade of our moral project.) What matters in the Jack case, for instance, is our coming together in a shared stand against murder, by which we make our society one that rejects murder, which is a better outcome than murdering some to save others. The RC view is that the best outcome is one in which we follow, implement, and enforce a rule against murder (although, as we grow more enlightened, perhaps one more advanced and sophisticated than the one we have now), and all that follows from that, not any outcome that you can determine just by looking at the outcomes of this particular case. The worst possible outcome is always to lose our rule-structured shared moral project.
"The worst possible outcome is always to lose our rule-structured shared moral project."
If that's true, then AC will never recommend acts that would risk that result. Still, there seems to be logical space for acts that have no such long-term risk even while going against the generally best rules. So the question is how we should assess such acts. From a consequentialist perspective, it seems clear that we should assess such acts positively (just as we should assess the generally best rules positively). We should want people to internalize the generally best rules, and then we should want them to act contrarily on just those occasions when it would be best for them to do so.
I mean, really, we should want each person to internalize whatever set of rules it would be best to have that individual internalize. There's no logical guarantee that uniformity would be socially optimal (though there are obviously cases where co-ordination is important, e.g. road rules).
Well, there are two very different issues here. One is whether the hope objection is a good argument against RC -- it is not, because it requires assuming as a 'best outcome' what is not the best outcome on RC, and is not what RC would think you should primarily hope for. The other, which, reading over your responses, seems to be what you are primarily concerned with, is whether RC or some form of AC like MLAC is a better way to handle things like moral rules. That is a distinct issue.
One of the reasons I think I'm so very skeptical of the objections under (1) especially is that there seems to be an assumption that RCists are really deontologists who are faking being consequentialist. This is not true; they are fully consequentialist, and if AC has any responses to the objections under (1) on consequentialist grounds, RC can generally make the same responses with little to no modification -- it will emphasize different things, but as a consequentialist approach, it can use the full panoply of consequentialist tools. Where RC and AC differ is not about anything abstract concerning consequences or their fundamentality but about what right and wrong are. For a typical RCist, right and wrong derive from consequentialist principles not when we are considering best outcomes at individual levels but only when we are considering best outcomes at population levels. This, I think, becomes really clear when we consider your response:
> "I mean, really, we should want each person to internalize whatever set of rules it would be best to have that individual internalize. There's no logical guarantee that uniformity would be socially optimal (though there are obviously cases where co-ordination is important, e.g. road rules)."
RCists don't hold that we should be uniform, since the typical RCist holds that there are kinds of goodness and badness other than moral goodness and badness, to be determined by improvisation, personal tastes, etc. The existence of moral codes obviously does not imply any uniformity; moral rules are not precise enough to dictate details, but only give frameworks. They hold that the best possible outcome for the largest population is obviously something that requires a considerable amount of coordination, and 'right', 'wrong', 'obligation', 'duty', etc. are terms we use to describe the coordination *specifically*, not every aspect of getting the best outcomes. Thus for every form of RC, morality *just is* like road rules, with the primary variations being in how they think our moral rules progress (by aiming at an ideal we can already rough out, or by testing and tweaking locally), and the best moral rules for the individual to internalize are those, like 'Don't murder', that are capable of coordinating actions in a beneficial way on a large scale, because that's what a moral rule is, not some personal set of rules you've decided on for yourself (even if they are very good rules).
"'right', 'wrong', 'obligation', 'duty', etc. are terms we use to describe the coordination *specifically*, not every aspect of getting the best outcomes."
This meshes with my sense that RC is not really a substantive normative theory, competing with AC in accounting for our normative reasons for action. Perhaps, as you say, RC is just giving an account of certain words that are more limited in scope ('right' etc. when defined as relating to public codes for coordination). AC obviously isn't talking about that; it's addressing the more fundamental normative question of what we really ought to do, all things considered, not just so far as public codes are concerned.
The substantive question is whether we always really ought to do what the best public code would direct us to do, and I take it to be completely obvious that the answer to this is 'no'. RC can introduce terms for talking more narrowly about ideal public codes, and these will often be of some normative interest (insofar as we very often have good reason to abide by such codes), but they aren't really giving an account of what one ought (all things considered) to do.
"considering best outcomes at individual levels but only when we are considering best outcomes at population levels"
The ranking of outcomes isn't level-relative, so this doesn't make any sense. Consequentialists evaluate entire possible worlds, and direct us to prefer better worlds over worse ones. AC specifically directs us to act so as to bring about preferable worlds. Contrary to the occasional caricatured misunderstandings of its critics, it most certainly does not direct us to ignore the "population-level" consequences of our actions. All of that is fully taken into account.
Huemer's version is the following, and it seems to run into problems: "Consider the sets of rules that could realistically be the socially accepted morality of a human society. Among those sets of rules, select the one that would have the best consequences if it were the socially accepted morality. Then act according to those rules."
1. Suppose that we're considering developing AI. If I develop AI there's a 2% chance of the world ending. If someone else does so first, there's a 30% chance. Assume additionally that AI has no benefits. In this case a good social rule would be not developing AI. However, it's still good for me to develop AI.
2. How we define the community is unclear. If it includes everyone, that runs into the Egyptology objection.
3. Consider a case where you know with metaphysical certainty, and are justified in believing, that killing one person would save 558 people. Most people who believe that are misled, and so not killing people even to save more would be a good societal rule. However, it would still be better to kill one to save 558, as Huemer agrees.
Yes, those are nice ways of fleshing out the "uniformity" objection!