[This is something of a manifesto for my current research project...]
Critics of consequentialism often object to how a consequentialist agent would (allegedly) think. They claim that the consequentialist agent is, in some sense, a bad character. Defenders of consequentialism typically dismiss such objections by citing the distinction between 'criteria of rightness' and 'decision procedures'. (Utility provides the criterion that determines the moral status of an act; it's a further question whether agents ought to attempt to calculate utilities themselves.) This is not entirely satisfactory. There remains a real objection here that needs to be addressed, not just dismissed. As I will explain, consequentialists still need to say something about what a 'rational' or fitting moral (consequentialist) agent would look like -- and when they do, this leaves room for others to object that the agent thus pictured is not in any sense morally 'rational' or non-instrumentally ideal.
To begin, we must distinguish two very different kinds of normative evaluation: the 'fortunate', and the 'fitting'. On the one hand, we can ask whether an agent's psychology is recommended by the normative theory as something to aim at -- roughly, whether it is desirable, or ought to be pursued, or such like. This is to ask whether it is a good or fortunate psychology to have. On the other hand, we can ask whether the agent's psychology embodies or "fits with" the normative theory -- roughly, whether the agent is responsive to the reasons posited by the theory: whether he desires what the theory says is desirable, etc. This is to ask whether the agent is, in a sense, rational or (as I will say) fit.
This distinction is illustrated in cases of 'rational irrationality', where the best disposition to have is one that embodies irrationality. Parfit's threat ignorer (for example) is irrational, but his psychology is rationally recommended or fortunate, since by being intrinsically defective in this way the agent is more likely to attain rational goods (he will no longer be vulnerable to threats or blackmail).
Similarly, a hedonist might say that only desires for pleasure are fitting, but it's fortunate -- better achieves the goal of pleasure -- to have other desires besides. Hedonists will think there's some sense in which an agent with other desires is rationally defective: they're desiring things which don't really warrant desire, after all. But, they'll say, it's fortunate to be defective in this way.
Finally, in the case of ethics, we should likewise distinguish 'morally fortunate' from 'morally fitting' character. The fortunate character is that which serves to promote the good. The fitting character is that which embodies an orientation towards the good. This is the sense in which someone might have "good intentions", even if the intention has bad consequences, and so is unfortunate. Talk of "virtuous" character also plausibly concerns the 'fitting' mode of evaluation.
To aid our intuitive grip on the distinction, we can identify two families of evaluative terms. In the first family, we find terms like 'desirable', 'fortunate', 'good [on net]', and their opposites. These mark a kind of evaluation that is at least partly instrumental. In the second family, we find terms like 'rational', 'competent', 'virtuous'/'vicious', 'fitting'/'perverse', 'well-meaning', 'well-functioning'/'defective', and perhaps 'intrinsically good'. I should emphasize that, while these terms mark a kind of intrinsic evaluation (whether an agent is rational, virtuous, etc., does not depend on the outside world), one needn't think that there is any value to being in this fitting state. That's a substantive axiological question.
We're now in position to distinguish two anti-consequentialist objections. One claims that the fitting consequentialist psychology is unfortunate or 'self-defeating'. This is a very poor objection, as I explain in my old post, 'What's wrong with self-effacing moral theories?'
But when deontologists complain about the bad character of a committed consequentialist agent, there is something else that they might mean. They might mean that the fitting consequentialist psychology is (contrary to the consequentialist's claims) not actually morally fitting. For example, they argue that the "ideally rational/virtuous" consequentialist agent is incapable of friendship or commitment to projects -- but, they add, this seems like an intrinsic defect: surely genuine virtue and rationality are not incompatible with these important goods. So, they conclude, the consequentialist's conception of rationality (virtue, fittingness) must be in error.
This objection is the real challenge. Consequentialists have typically neglected it, because they have focused exclusively on evaluations of fortunateness. They haven't appreciated that their theory also commits them to a conception of the morally fitting agent. To take up this challenge, we must either (i) bite the bullet and insist that what the deontologist identifies as moral 'defects' are not really so, or else (ii) argue that, properly understood, the fitting consequentialist agent would not in fact possess the identified defect. (See, for example, my response to Stocker: 'Satisficing and Salience'.)
Of course, the first step towards a solution is recognizing that you have a problem.
[One might understand the developers of 'indirect' or 'sophisticated' consequentialism as working in this vein. But they have not always been clear about whether their theory commends the 'indirect' decision procedure as fitting or merely fortunate. Hence my previous post exploring the relation between sophisticated consequentialism and 'rational irrationality'.]
Richard,
Could you say something more about what fittingness amounts to? Act-utilitarianism tells me to sometimes commit murder. It also tells me to develop a disposition to never murder (let's assume). What is it for me to fit this theory?
If I fit it so long as I'm responsive to some of its requirements, then I fit it either if I murder in the correct circumstances, or if I never murder. So I pick the latter, easier task. I fit the theory.
On the other hand, perhaps I only fit the theory if I'm responsive to all of its requirements. But now it seems impossible to fit the theory: I can't be responsive both to the requirement to have a disposition to never murder and also the requirement to sometimes murder. (Unless we're reading "responsive" in some weak sense.)
I suppose I'm worried that the former sets the bar too low, the latter sets it too high, and I'm not really sure where a non-arbitrary point might be found in between. So perhaps if you could say a little bit more about this?
Hi Alex,
As a first approximation, one "fits" (or is rational by the lights of) act utilitarianism insofar as one is responsive to considerations of utility, and hence disposed to perform utility-maximizing acts. One of those acts might be to change one's dispositions, so that one no longer fits the theory (but is instead strictly disposed against murder). This is structurally similar to cases of 'rational irrationality'. One is rational (fitting) in acquiring this new disposition, but not in possessing it, since one is no longer sensitive to the reasons posited by the theory.
Sorry Richard, I can't have been clear. I agree that in this case, one may well be fit in some respects but not others. The question is when that amounts to an objection to a theory. Is a theory objectionable because it implies that one can't be fit in all ways at once? Or only when it implies that one can't be fit in any way? As I say, the former looks too demanding, and the latter looks too weak.
(Oh, and perhaps this is just me - though I doubt it - but this comment box is weird. I can't move the text cursor around except by mouse, and nor can I cut, copy or paste in it.)
Hi Alex, I do not think that this is a case of being "fit in some respects but not others."
I meant to suggest (at least as a first approximation) that one is wholly fit (at time t) when one is (at time t) disposed to maximize expected utility. Now, if it would maximize expected utility at t1 to act so as to bring about unfit dispositions at t2, then the agent will be (wholly) fit at t1 and no longer fit at t2. (Or, if they fail to act in this required way at t1, then they are not really fit then.) This is the analogue of Parfit's observation that circumstances may render it impossible to be rational at all times, since to be rational at one time we are obliged to ensure our later irrationality. But at least for each particular time, there are no conflicts in what is rational or fit.
So, I should emphasize that fitness (like rationality) must be evaluated at a time. It is not a global property of agents.
Thanks Richard, that's helpful.
Hi Richard, you wrote:
"So, I should emphasize that fitness (like rationality) must be evaluated at a time. It is not a global property of agents."
This seems false. At least conceptually, when we talk about virtues and about agents, fitness (in other theories) is a global property. To specify that in act consequentialism fitness is not global but time-specific is at the very least strange. An argument could be made that fitness is in itself a global property and is not amenable to being used in a time-specific sense.
Hi Murali, I don't see how it can be a global property (in the sense of not changing over time). Suppose an evil demon will torture people unless you take a pill that will cause you to become malicious. In this case, taking the pill is clearly the benevolent thing to do. So if you're benevolent now, you'll be malicious later. Characters change, hence so must character evaluations.
"Characters change, hence so must character evaluations."
Fair enough, but does that help you? I.e., it is still the case that your virtuous agent at some point ceases to be sensitive to the right kind of reasons in order to obtain more fortunate dispositions.
This may in fact yield interesting conclusions if I specify the situation slightly differently (say, in ideal-agent situations and prisoner's dilemmas). The argument flows along the lines of: if an ideal agent (one who is properly sensitive to reasons) has particular dispositions, it must be the case that those dispositions are fitting. Then it is simply a matter of speculative theorising as to what actually provides reasons for said disposition.
I will actually have to spell out what I mean better, which I will do in an upcoming post on my blog.
I'm a bit puzzled as to how this is a problem for Cs. My instinct is to say that the GC response is: "fittingness just isn't that important." You say "they haven't appreciated that their theory also commits them to a conception of the morally fitting agent". But this strikes me as either trivial or wrong: while we/they are committed to a given preference relation for any two states of the world, which can be anthropomorphized to an agent as in economics, there's nothing about this 'agent' that is 'ideal' in any substantive sense; the consequentialist wants there to be 'fortunate' rather than 'fit' people.
This is related to what I was trying to say in the last comment I left, about why I emphasize the G in GC: when you insist that the moral primitive is the preference relation over states of the world rather than any particular application *of* that relation, there's no need to be bent out of shape by the fact that a certain *globally undesirable* application of that preference relation is, in fact, globally undesirable.
Hi X, you're missing the point. The objection is not just that it's bad or undesirable to be a rational consequentialist agent. That is, as I agree in the main post, a "very poor objection".
The more serious objection is that consequentialism entails claims about rationality that just aren't true. Now it's no response to this to say that rationality "isn't that important". Whether it's important or not, if your theory implies something (apparently) false, then that's a(n apparent) problem.
I do understand that you're trying to mount a separate critique, but I'm having trouble seeing what's there. Reading your post once again, it seems the consequentialist can simply deny what you're attributing to him at the point where it begins to clash with a competing claim.
The dilemma is supposed to be here: 'They might mean that the fitting consequentialist psychology is (contrary to the consequentialist's claims) not actually morally fitting. For example, they argue that the "ideally rational/virtuous" consequentialist agent is incapable of friendship or commitment to projects -- but, they add, this seems like an intrinsic defect: surely genuine virtue and rationality are not incompatible with these important goods. So, they conclude, the consequentialist's conception of rationality (virtue, fittingness) must be in error.' But I would deny that an agent whose psychology embodies a choice-situations-to-actions mapping equivalent to that which C's think determines actions' rightness is "ideally rational/virtuous." I don't think that's an inescapable judgment for the C to make--not at all.
"Rational" is of course an incredibly slippery word, and of course it's hard to iron out all the ambiguities in a blog post or comment, but I just don't see why you're applying it to an agent's psychological state in the way that you are. Here's an alternative: Mr. C has an axiology that basically ranks states, and his consequentialism arises from his insistence that the entire deontic realm--talk of rightness, recommendation, requirement, all that stuff--must derive from the axiology. This derivation is *pragmatic* and *contextual* (Alaistair Norcross is very good on this) insofar as the relevant alternatives (and hence relevant differences in consequences) are going to vary depending on what the deontic judgment in question is *for*. We can sensibly ask about the right disposition for me to *start trying to cultivate right now*, for me to *have had cultivated in my as a child*, for others to *judge me against* vis-a-vis criminal responsibility, &c. So long as we are in fact consistent about applying our axiology to the comparison we've decided to make, we needn't insist that only a certain sort of right answer--that of an agent with full information and no cognitive limitations facing a situation whose outcomes are unaffected by the very process of his decision-making--is the "ideal" of rationality.
What I'm saying is that once you move from abstract mappings (from a set of state-paths to a smaller, preferred set, &c) to "character" or even momentary brain-states, you're already in the realm where Mr. C insists that *only* talk of "fortunate" makes sense.
That is, you can certainly say that only the act-oriented psychology is "fitting," but Mr. C just doesn't see that sort of "fittingness" as being any more fundamental than, say, the "fittingness" of having the right psychology for achieving goodness while operating under severe cognitive disabilities--at least if both are detached from whether or not the agent *has* the capabilities to make each "fortunate". *Neither* is "genuine virtue/rationality" in any sense that would carry weight--in any sense that would contradict our firm intuitions that genuinely rational people can still be partial to friends & lovers, etc. So there's no commitment to a false claim; Mr. C just thinks that the "genuinely rational/virtuous" is in fact picking out a different sort of psychology.
I get that your views are connected to your views on rationality in general, and I'm unlikely to persuade you--but then, your project is the one trying to convince. =)
I understand that you see "rational" as "responsive to the actual reasons", and one can frame that as a consequentialist regulative ideal, too, but I don't see how that gets you the problematic "ideally rational character." Why is it ideally rational to be "perfectly" oriented to reasons you aren't able to, in fact, reliably perceive and act upon? That seems silly, not rational; indeed, it means ignoring valid reasons for acknowledging one's limitations. If the point is that in moments of searching self-reflection--while meditating in front of one's shrine to Sidgwick, say--one is unable to be a good friend/lover, this is true but trivial, since it holds for every reasonably absorbing mental state.
The distinction you're making seems rather like the one Pettit makes between promoting and embodying value in his short "Consequentialism" piece. But being a consequentialist just means rejecting the claim that "embodying" value--or having "fitting" psychology--is an autonomous sphere of evaluation, period.
Hmm, Pettit contrasts promoting and honouring value, but that's different from what I have in mind here.
I don't see any reason to think that "being a consequentialist just means rejecting" the 'fitting' mode of evaluation. (That would certainly be a surprise to readers of Parfit's Reasons and Persons.) It merely means that given a choice between being fitting or fortunate, we should prefer the latter.
Again, as Parfit's discussion of 'rational irrationality' shows, we can make perfect sense of the idea that it might be unfortunate to be rationally fitting. These are two different modes of evaluation, and they might come apart. I don't really see how one can deny this. One might prefer to focus on evaluations of fortunateness, or insist that it is more important for various purposes (I would agree that it is more important for many, but not all, theoretical purposes). But it's obviously possible that in some circumstances the most fortunate psychology to have is one that is uncontroversially crazy or vicious. These latter terms just aren't in the business of assessing fortunateness. So any complete moral theory needs the conceptual resources to talk about more than just that.
N.B. I'm not here making any firm claims about what is fitting according to consequentialism. I offered a "first approximation" to Alex for purposes of illustration, but I agree with you that a consequentialist needn't be wedded to anything quite so crude. Insofar as you argue against this by highlighting "reasons for acknowledging one's limitations", etc., then you are in fact engaging in my project of developing a more appropriate account of consequentialist rationality. That's great -- that's exactly what I'm saying we need to do.
Maybe we're far apart here, maybe not. I'm not quite sure. I think my position--Norcross mixed with Gibbard, maybe--is more revisionist than Parfit's vis-a-vis rationality, insofar as I'm not willing to sign on--at least without caveats--to: "But it's obviously possible that in some circumstances the most fortunate psychology to have is one that is uncontroversially crazy or vicious. These latter terms just aren't in the business of assessing fortunateness."
I think that cashing out viciousness & craziness in terms of 'fortunate' is actually more fruitful than doing so in terms of 'fit', precisely because the former allows you to condition on all sorts of things that will vary across time, place, culture, etc. 'Crazy' is a good example of this--not only can the consequentialist say his 'fortune'-grounded concept of crazy is *more useful* in terms of carving out a concept that does helpful work in assessing perceived defects in rationality relative to some (contingent) standard, it's not even all *that* revisionist insofar as it also does a better job of explaining its historical use (see Foucault & everything else written on the history of 'mental illness').
So: even if some don't want to go there, I think consequentialists can and should deflate 'rational' as just another deontic predicate, one that, much like right, ought, may, &c., is going to be heavily *context-sensitive*. What's special about it is that its core uses are focused on reasoning as an activity and reasons as objects of that activity, but just as with 'right', there's no ideal of 'rational' that applies to someone's psychology, without being conditioned on other things.
(And I know Pettit's distinction isn't precisely yours, but I really do think it parallels it--and I think his reasons for rejecting 'honouring' are similar to mine for rejecting 'fit' as an independent normative sphere.)