(i) the belief that you would desire to phi if you were more rational
(ii) the desire not to phi [or, more weakly, the absence of any desire to phi]
The agent who holds both these attitudes is irrational by their own lights. So this opens a path from theoretical to practical rationality. If you're rationally required to have belief (i), then it would seem you're also rationally required to have the corresponding desire (or at least not to have the opposite desire).
I owe the above ideas to Michael Smith. Now, Clayton discusses a similar principle:
(1) If a subject judges that she should Φ and it’s not the case that she should refrain from judging that she should Φ, it’s not the case that the subject shouldn’t Φ.
Here are a couple of variants:
(1a) If S ought to believe that she should Φ, then it's not the case that S shouldn't Φ.
(1b) If S ought to believe that she should Φ, then S should Φ.
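For concreteness, we can put these in bare deontic notation (the gloss is mine, not Clayton's): writing O(p) for 'it ought to be that p' and B(p) for 'S believes that p', (1a) becomes O(B(O(Φ))) → ¬O(¬Φ), and (1b) becomes the stronger O(B(O(Φ))) → O(Φ). Note that (1b) entails (1a) provided that O(Φ) and O(¬Φ) can never both hold.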
The guiding thought here is that a practically rational agent does what she judges she ought to do. So it would place inconsistent requirements on her to prohibit Φ-ing even as we require her to judge she ought to Φ (which will lead to her Φ-ing on pain of irrationality). That would effectively be to require her to be irrational.
What do you think of these principles?
It doesn't seem inconsistent to think both that:
(A) On the basis of the evidence she has, S ought to believe that she should phi; and that
(B) S nonetheless shouldn't phi, because of information she could not reasonably be aware of.
Perhaps these are simply different senses of 'should': (A) relies on a rational should, while (B) relies on an all-things-considered (or moral) should.
But it doesn't seem to me that this is effectively requiring that S be irrational. It is saying that rationality (in a certain sense) is simply not sufficient to guarantee moral action. Which doesn't seem all that surprising to me at least: sometimes your best just isn't good enough - even if it might be in some sense excusable.
There's an exclusive club for only the most rational people. Here's the door! If I open it, they'll test me to see how rational I am: if I pass the test, they'll welcome me to their awesome party; if I fail, they'll throw eggs in my hair.
I'm probably not rational enough to pass the test, and I know that. So I have no desire to open the door. But of course, if I were more rational, I would desire to open the door, because an awesome party would await me, and I know this counterfactual too.
No incoherence here, as far as I can tell.
Jonathan - that's a nice point. There is a sense in which you're sub-rational by your own lights there, but that's for prior reasons rather than anything to do with that (absence of) desire.
So I should have been more careful in stating the belief in (i). It's not just that you would desire to phi if more rational, but rather that you would, if more rational, desire that, given your actual condition and situation, you phi. (I discuss some further complications to this sort of approach, here.)
Conchis - distinguish objective vs. evidence-relative 'shoulds'. This distinction applies to the theoretical and practical realms alike. (There's some objective sense in which I ought to believe every a priori truth, and some more subjective sense in which these are beyond my reach.) Now, evidence-relative epistemic oughts clearly don't imply objective practical oughts. But so long as we're consistent in talking about only objective (or only evidence-relative) oughts throughout, then we can avoid that problem.
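Using that same shorthand, with subscripts 'o' and 'e' for the objective and evidence-relative senses: O_e(B(O_e(Φ))) → ¬O_e(¬Φ) and O_o(B(O_o(Φ))) → ¬O_o(¬Φ) each read the 'should's uniformly, and neither is touched by your case. What your case tells against is only a mixed principle like O_e(B(O_e(Φ))) → ¬O_o(¬Φ).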
I love them!
Thanks for the link, Richard. The principle I like is inspired by Smith's argument for the practicality requirement on moral judgment, but it's been weakened in an important way. Smith's argument is concerned with moral rightness, and so he faces the objection from those (Railton?) who worry about the amoralist. I'm not assuming that the should is the moral should, only that it's a should. So, if the amoralist judges that he should sip tea rather than do something to save a drowning child, my point is that there's a sense in which he shouldn't judge that he should do otherwise if it's true that the amoralist is doing what the amoralist should do. There's a reading of 'should' on which the 'should's should swing together.
Fwiw, I agree with your response to Conchis' objection. If there's an evidential sense of 'should', the beliefs we should have about what should be done will correspond to truths about what should-e be done. If there are further shoulds that are independent of the evidence (should-ie, say), then if you shouldn't-ie X, you shouldn't-ie judge that you should X.
Why believe the principle? Smith argues that unless we suffer from a kind of irrationality, we will never violate the principle. I suppose I'd note a few things.
1. If you deny the principle, then questions like this should be open:
Q: I know he/I should think it's his/my duty to X, but should he/I X?
Worse, it seems that situations could arise where the correct thing to say is this: you should have appreciated that doing X was a necessary evil, but you shouldn't have done X!
2. If you believe that the principle is false, you believe in obligations that you could only fulfill by being deeply irrational and acting against the judgment you ought to make about what ought to be done. Who could believe in such obligations? Surely not those who think that our obligations must be rationally identifiable as such. So, those who think that there cannot be essentially unknowable obligations, or obligations you could only believe in by being deeply irrational, should not think that there are ever the complex obligations there would have to be for my principle to be false.
3. One lesson from the toxin puzzle is this: the reasons that bear on whether to act bear on whether to intend to perform that action. One lesson to take from cognitivists is this: the judgment that you should X, which rationally requires the intention to X, essentially involves the belief that you should X. If the reasons that bear on whether to act bear on whether to intend, they bear on whether to believe you ought to perform the relevant actions. So, if there's conclusive reason against the action, there's conclusive reason against the intention, so there's conclusive reason not to believe you ought. So, given that there are rational requirements linking action to intention and belief to intention, there are no counterexamples to my principle.
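Making the structure of 3 explicit (this is just a reconstruction of the above): (a) conclusive reason against Φ-ing is conclusive reason against intending to Φ (the toxin-puzzle lesson); (b) conclusive reason against intending to Φ is conclusive reason against believing you should Φ (the cognitivist lesson); so (c) conclusive reason against Φ-ing is conclusive reason against believing you should Φ. Contraposing (c), and assuming oughts are consistent: if you ought to believe you should Φ, there is no conclusive reason against Φ-ing, which is roughly (1a).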