These two classes will tend to overlap, but not perfectly, which leaves room for the possibility of "rational irrationality". We can acknowledge that the mere fact that a disposition would be rational to acquire (because desirable all things considered) does not in itself guarantee that acting on the disposition is rational. For there are situations in which it would be (antecedently) rational to bring it about that we are (subsequently) irrational.
While acknowledging this possibility, we may still think that there must be some "transmission" principles according to which the rational status of a general rule can be inherited by the particular acts it prescribes. And, indeed, the distinction I've highlighted suggests an obvious candidate principle: we just need to restrict the rules in question to those that are 'rationality-enhancing', i.e. desirable for their (expected) impact on your downstream actions, rather than for extrinsic reasons. [Update: this isn't quite sufficient, as explained here.] Consider:
If S rightly adopts a rule R as (maximally) 'rationality-enhancing' (in my stipulated sense), and R prescribes φ-ing in circumstance C, then when S is in circumstance C, S rationally ought to φ.
This is supported by considerations of meta-coherence. Ex hypothesi, the rule R offers the most reliable guidance available to S -- in particular, it is more reliable than attempting to autonomously determine what the best result would be in each case. (And it is also more reliable than any identifiable alternative, e.g. "following R except in circumstances with the subjectively distinguishing feature F.") So any given departure from R can be expected to have worse results than would be obtained by following R. Hence, in any given case, the agent should follow R's advice.
My main cause for hesitation here comes from cases where you adopt a rule as a hedge against (as it happens, misleading) evidence that you might be biased in your subsequent judgments. So the rule might tell you to disregard certain first-order evidence, because you can't be trusted to evaluate it rationally. But if you actually are capable of evaluating it rationally, then we may think that there's an important sense in which you rationally ought to be guided by the (first-order) evidence. Or, even if the higher-order evidence makes some contribution, we may still doubt the radical claim that it completely swamps the first-order evidence in determining what you rationally ought to do. Anyway, this raises a tricky epistemological issue, which I set out in more detail in my old post, 'Personal Bias and Peer Disagreement'.