Almost everyone agrees that you should break the rules if that's the only way to avoid disaster. But it seems intuitively objectionable that Act Consequentialism tells us to (say) break a promise whenever doing so would be even the slightest bit better than keeping it. Well, maybe. I agree that there's something troubling about the agent who breaks a promise the moment it seems there's something (marginally) better he could do. But such an agent is not, I will argue, what a competent act consequentialist would look like.
As I argued in 'Defective Deliberateness', competent agents can't be constantly deliberating. We must also recall that overt calculation often goes awry. So the competent act consequentialist largely relies on rationality-enhancing dispositions and rules of thumb in his everyday life, pausing to reflect only when his well-calibrated sub-personal mechanisms alert him to the need (say, due to complex novel circumstances that his "auto-pilot" wasn't designed to deal with). Everyday promise-keeping is not exactly novel, so for the competent agent the question whether to keep the promise shouldn't even arise. It's a no-brainer.
(This is not to say that it's always clear that keeping promises is objectively for the best. But it typically is for the best, and on the odd occasion when it isn't, this almost certainly won't be clear. In such a case, the possible benefit from breaking the rule is so marginal that it generally won't be worth the cognitive cost of attempting to assess the precise balance of reasons.)
But suppose the agent comes to consider the question anyhow. What should he conclude? We can stipulate that the outcome would in fact be marginally better if he broke his promise, but does the agent himself have any way of knowing this? Not easily, at least. (Among other things, he'd need to first consider the possibility of self-serving bias corrupting his judgment, and weigh the apparent benefits of rule-breaking against the long-run value of retaining a reputation for trustworthiness.) Maybe if he heard the booming voice of God reassuring him of this fact, he could rationally go ahead and break his promise without further worry. But in ordinary circumstances -- which are what we're concerned with here -- it's simply never going to be clear when rule-breaking is marginally beneficial. So the agent faces an immediate choice: he can (i) break the rule even though it's unclear to him whether any good would come of this; (ii) sink further cognitive resources into investigating a question that he probably shouldn't have bothered to ask in the first place; or (iii) simply keep his promise and turn his attention to more important matters. It seems pretty clear that, in this sort of case, option (iii) is the way to go.
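To make the cost-benefit structure explicit, here's a rough sketch (the symbols are mine, purely illustrative; nothing in the argument hangs on them). Further inquiry into whether to break the promise is worthwhile only if

\[ p \cdot B > C, \]

where \(p\) is the probability that inquiry would correctly identify a genuine net benefit to breaking the promise, \(B\) is the size of that benefit, and \(C\) is the cognitive cost of the inquiry. By stipulation \(B\) is marginal, and \(p\) is well below 1 (given self-serving bias and the reputational stakes), so even a modest \(C\) makes the inequality fail. That is just option (iii) restated in cost-benefit terms.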
In sum: breaking a rule will be clearly worthwhile only in cases where it is also of significant benefit (in which case we all approve of rule-breaking anyway). If it's of merely marginal benefit, this fact typically won't be clear enough for a rationally self-doubting agent to confidently act on it. And the low potential payoff means that it isn't worth inquiring further: better just to stick with the generally reliable rule of thumb. So a rational act consequentialist generally won't be found engaging in marginally beneficial rule-breaking after all. Such an agent would even share our intuition that there's something awfully dubious about anyone who would act that way.
This seems to me to defang the original objection. What do you think?
Would a well-programmed act-consequentialist break a rule where the benefit would be marginal? Probably not. Does this show act-consequentialism isn't so counterintuitive? Act-consequentialism, as a theory about which acts are morally required, holds that one is morally required to break the rule whenever doing so would produce a net increase in good, whether massive or merely marginal. Talking about the decision procedures of act-consequentialist agents changes the subject.
The view that act-consequentialists would virtually never have sufficient evidence of a net advantage to breaking common moral rules was espoused by G. E. Moore, who argued on this basis that act-consequentialist agents should follow the rules strictly.
I think this is quite satisfactory. Sorry I've apparently not done a good job of keeping track of your views: in a comment on a recent thread I said something like this, but put it in a way that implied I was disagreeing with you, when evidently I wasn't.
Hi Brad, I'm not so inclined to formulate Act Consequentialism in terms of the folk concept of 'permissibility'. That concept strikes me (and, I suspect, many other consequentialists) as somewhat vague, and certainly not fundamental. We would sooner use the notion of 'ought', or of 'decisive normative reasons', to formulate our views; and we might further note that these core normative concepts come in more or less objective senses.
So let's first consider the 'objective' or 'fact-based' ought: the sense in which it might be said that one objectively ought to duck if there's a stone flying towards the back of one's head, regardless of whether one is aware of this fact. (This is obviously a technical term that's not particularly closely linked to the ordinary notion of permissibility!) We can ask: is it counterintuitive to hold that we have most objective reason to break the rules when this would be marginally beneficial on net? That doesn't seem at all counterintuitive to me. (This may be partly because I don't have any independent sense of the 'objective ought' besides that of what's most desirable, or what God would advise us to do. So it may not be a particularly helpful concept.)
There's no objection here in terms of the 'objective ought'. That's why I instead discussed the 'rational ought', or what it would be reasonable for an act consequentialist to do. Far from "changing the subject", it seems to me that this is the only way that the objection can get any real traction in the first place.
P.S. I agree that it would be counterintuitive to identify the ordinary notion of 'permissibility' with the technical notion of 'objective oughts', but that's a separate issue.
Sorry to make a belated reply, but I've been interested in this kind of question recently.
Your explanation is unpersuasive to me. We are unsure about a lot of things; why are promises special? We can simply take the expected value (based on an efficient amount of effort spent calculating) -- why impose the additional burden of being sure?
I have an alternative explanation, and it's pretty simple. Promise-breaking is rife with external costs. Not only is your own credibility diminished; the entire institution of trust is diminished. It's very hard to measure exactly how much damage breaking a promise does to people's general ability to trust each other, but just like anything else, you can estimate it. Breaking any promise will likely do surprisingly substantial damage in comparison to the gains. The best comparison I can see is global warming, with broken promises paralleling greenhouse gases.
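To put this in toy expected-value terms (my notation, with the magnitudes made up just to show the structure): letting the status quo of keeping the promise score 0,

\[ EV(\text{break}) = G - D_{\text{self}} - D_{\text{inst}}, \]

where \(G\) is the direct gain from breaking the promise, \(D_{\text{self}}\) is the expected damage to your own credibility, and \(D_{\text{inst}}\) is the harder-to-measure expected damage to the general institution of trust. If \(G\) is marginal (say 1 unit) while even a conservative estimate puts \(D_{\text{self}} + D_{\text{inst}}\) at several units, then \(EV(\text{break}) < EV(\text{keep})\) in virtually every marginal case -- no special certainty requirement needed.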