The title question stands in need of clarification before it can be answered. First, we must specify what sense of ‘ought’ is involved. On various “adverbial” senses of the term, it simply highlights the requirements of some framework or other.[1] You morally ought to do that which is required by morality, prudentially ought to do that which is prudent, and so forth. In this vein, if Φ-ing is a rational requirement, then we might restate this fact by saying that you rationally ought to Φ. But this does not state anything new, over and above the fact that Φ-ing is a rational requirement. So it is not this sense of ‘ought’ that we are interested in. Nor does it seem entirely adequate to ask whether we ought to be rational according to some other framework of requirements, say those of morality, prudence, or etiquette. These might be interesting questions in their own right, but answering them does not necessarily bring us any closer to answering the original question. We are still left wondering whether we ought to respect those requirements.
Not every possible framework of requirements is a source of genuinely normative reasons for action. The mere fact that I am “required” by etiquette or convention to Φ does not guarantee that I ought to Φ, or even that I have any real reason at all to Φ. So a central problem for the philosophy of normativity is to distinguish which requirements have genuine normative force – i.e. which are the ‘reason-giving frameworks’ – and which demands we may rightly ignore. I will use the term ‘ought’, simpliciter, to mean what some call ‘ought all things considered’, that is, a binding normative claim on our actions. This is a fairly blunt term, however, so I will continue to employ the notion of a (pro tanto) reason as something that has genuine normative force, but that might be outweighed by other reasons. A conclusive reason is one that establishes an ‘ought’ fact.
Skeptics about normativity deny that there are any reasons or ‘ought’s in this normative sense. They hold that the most we can say is that an action is required according to the standards of morality, or etiquette, or rationality, and that there is no further sense in which we really ought to perform the action. On this view, it is straightforwardly false that we ought to do anything, and hence that we ought to be rational. I will disregard normative skepticism for the remainder of this essay, and instead assume that we really ought to do some things and not others.
This leaves two main positions, which I will call ‘normative non-cognitivism’ and ‘normative realism’. Normative non-cognitivism is the view that the reason-giving frameworks are those that we personally commit to, or accept as authoritative over ourselves. So, for example, if I accept the authority of prudential demands but not moral ones, then only the former requirements provide me with reasons for action. Given these non-cognitive commitments, if I could advance my self-interest through moral wrongdoing, then that’s what I ought to do. Since most of us accept the requirements of rationality as authoritative, normative non-cognitivism straightforwardly implies that this gives us reason to be rational – though what we ought to do must also take our other commitments into consideration. But even if we did not accept rational requirements for their own sake, they might provide indirect reasons through their tendency to promote other ends that we have committed to, such as moral action or true beliefs. I will return to this possibility later in the essay.
The other view, normative realism, holds that reasons exist and apply to us whether we like it or not. There is nothing in this definition which tells us how to distinguish which frameworks are genuinely normative. But I will not address this problem here. I will simply assume that there are some normative reasons, without concern for the details of what specific type of reason they might be. Although I will be assuming normative realism from here on, many of my arguments will also apply to normative non-cognitivism in cases where the individual has no intrinsic commitment to the framework of rational requirements. In either case, we have assumed that there are some reasons, some things we ought to do, and the question is whether the normativity of rationality might fall out of this.
I must now clarify what it means to be ‘rational’. Sometimes people use the term to denote ‘that which is best supported by reasons’. From this it would trivially follow that we ought to be rational. But that is not how the term is intended here. Rather, I will take rationality to be the purely internal matter of having one’s mind in good order, regardless of how this matches up to external facts. As Kolodny describes it, rationality is “about the relation between your attitudes, viewed in abstraction from the reasons for them.”[2]
Some requirements govern static relations between our mental states, prohibiting certain combinations of conflicting attitudes. Such ‘state-requirements’ have wide scope, as conflicts may be resolved by revising either one of the conflicting states. For example, consider the following principle:
(I+): “Rationality requires one to intend to X, if one believes that there is conclusive reason to X.”[3]
You can violate this requirement by simultaneously believing you ought to Φ but intending not to. There are two ways to avoid this internal conflict. You might intend to Φ, or else you might cease to believe that there is conclusive reason to Φ. A similar range of options will be available for meeting any other state requirement.
However, not all requirements of rationality are state requirements. There can also be ‘process requirements’, which govern transitions between mental states. We can see this because not all means to achieving state requirements are equally rational. Consider again the conflict state whereby you believe you ought to Φ, whilst intending not to. Further suppose that ‘all else is equal’, i.e. you have no other Φ-directed attitudes. In response to this conflict, rationality surely requires you to revise your intentions, not your beliefs about what action is best supported by reasons. Rationality requires us to go where our assessment of the evidence takes us, rather than revise our assessments to match the conclusions we’d like to reach. The latter sort of revision amounts to wishful thinking, not reasoning.[4] This leads us to the principle:
(I+NS): “If one believes that one has conclusive reason to X, then rationality requires one to intend to X.”[5]
This process requirement has narrow scope – the requirement attaches to the consequent rather than the whole conditional. We might add a ‘ceteris paribus’ clause to exclude more complicated cases whereby, for example, you have a second-order belief that your ‘belief that you have conclusive reason to X’ lacks sufficient evidence. In fact, Kolodny argues that (I+NS) holds even then, though one is also rationally required to revise beliefs that one judges to be insufficiently supported by the evidence. He suggests that you could be bound by both these ‘local’ rational requirements simultaneously.[6] But nothing of importance rests on this contention. We may simply exclude such cases from our consideration, and hold that if you believe you ought to Φ, and ‘all else is equal’ in the sense that you lack any conflicting beliefs relating to the normative status of Φ-ing, then you are rationally required to Φ.
We are now in a position to prove, by appeal to the ‘bootstrapping argument’, that we do not in general have conclusive reason to be rational.[7] For suppose we always ought to do as rationality requires. This supposition entails the absurd result that many normative beliefs are self-justifying and thus infallible: believing that you ought to Φ would suffice to ensure that you truly ought to (intend to) Φ! For recall that we have already established that one rational requirement is the narrow scope principle (I+NS), at least in normal situations. If you believe that you ought to Φ – that the weight of reasons supports it – then you are rationally required to follow through on your assessment by intending to Φ. But then, supposing that we ought to do as rationality requires, it follows that we in fact ought to intend to Φ, simply in virtue of the prior belief. This absurd consequence must lead us to reject the supposition. Thus it is not the case that we always ought to do as rationality requires.
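The structure of the bootstrapping argument can be set out schematically. The notation here is my own gloss on the prose above: read B(p) as the agent's believing that p, O(p) as its being the case that the agent ought to see to it that p, R(p) as rationality's requiring that p, and I(Φ) as the agent's intending to Φ.

```latex
% Schematic reconstruction of the bootstrapping argument (labeling mine).
\begin{align*}
&\text{(1)}\quad B(O(\Phi)) \rightarrow R(I(\Phi))
    && \text{[the process requirement (I+NS)]}\\
&\text{(2)}\quad R(X) \rightarrow O(X)
    && \text{[supposition: we always ought to do as rationality requires]}\\
&\text{(3)}\quad B(O(\Phi)) \rightarrow O(I(\Phi))
    && \text{[from (1) and (2), by the transitivity of the conditional]}
\end{align*}
```

Since (3) lets an 'ought' fact be bootstrapped from a mere belief, and (1) was established independently, the supposition (2) is what must be rejected.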
Perhaps rationality is normative in the weaker sense that we have pro tanto reason to respect rational requirements. This entails the weaker bootstrapping result that if you believe you ought to Φ, this creates a pro tanto reason to intend to Φ.[8] But this result is not so objectionable. In such a situation, even if you ought not to Φ, it does seem that at least one thing can be said in its favour: namely, that by Φ-ing you would be acting in accordance with the requirements of rationality. You would be following your best assessment of what you ought to do. One might deny that this could be a pro tanto reason for Φ-ing on the grounds that an agent’s mental states are strictly irrelevant to what they have reason to do, but such a position simply begs the question against the normativity of rationality.
Kolodny points out that we do not typically treat rational requirements as providing some further reason for action. If you already believe you have conclusive reason to Φ, it would be superfluous for someone to advise you to Φ by citing the further reason that rationality requires it. You already take yourself to have conclusive reason to Φ, and so do not stand in further need of convincing. This insight forms the core of Kolodny’s Transparency Account:[9] from the first-person perspective our beliefs seem transparent to truth, so what we believe we ought to do – and hence what is rationally required – will appear to us as what we ought to do, simpliciter. So even if we in fact have no reason to be rational, the rationally required action will always seem, from a first-personal perspective, to be the one we have conclusive reason to do. Kolodny thus explains the apparent normativity of rationality as a mere illusion. We need not go so far, however. By allowing the ‘bootstrapping’ of pro tanto reasons, we leave open the possibility that there are genuine reasons to be rational, in addition to the merely apparent normativity provided by the transparency account.
Kolodny further suggests that a reason must be something we can reason from, so the first-personal superfluity of rational requirements rules out their normativity.[10] But some fact may count in favour of an action, or help explain why we ought to do it – and thus be a ‘reason’ in the sense used in this essay – even if it could never be recognized as such within the context of first-personal deliberation.[11] The fact that an action is rationally required could be part of the explanation of why we ought to do it, even if it is not a fact that we could reason from in first-personal deliberative contexts.
Moreover, even if rational requirements do not themselves provide reasons, they might still be normative in the weaker sense that we necessarily have some (other) reason to do what rationality recommends.[12] Violations of state-requirements, at least, involve some sort of internal inconsistency, which guarantees that one has gone wrong in some respect. Such violations will thus necessarily be accompanied by a reason to get out of them – namely, that at least one of the conflicting attitudes must be in error. So we always have some reason to meet rational state-requirements. However, we earlier established that some rational requirements are process requirements. Due to their specificity, such narrow-scope requirements may ‘misfire’, telling us to revise one of the conflicting attitudes when in fact it is the other that is objectively in error. So process requirements, unlike state requirements, are not guaranteed to be accompanied by independent reasons. It remains an open question whether we always have a reason to meet rational process-requirements.
Such a reason might be instrumental to the realization of other ends, or it might be intrinsic, treating rationality as an end in itself. The case for intrinsic reasons might be supported by the idea that rationality is a virtue much like courage, the display of which is always admirable in some sense.[13] This most plausibly leads to the idea that we have reason to possess the dispositions constitutive of the rational faculty. Perhaps we ought to be rational in character, regardless of whether we have reason to do as rationality requires in any particular instance.
Such a claim may also be supported on instrumental grounds, since possession of the rational faculty is, plausibly, the most reliable means psychologically available to us for achieving our other goals.[14] Though it would be ideal to possess precisely those dispositions that would guarantee our doing what we ought in every particular case, I assume this is not a genuine psychological possibility for us. General rationality is, we may suppose, the closest approximation available to us. Granting that we ought to be rational, in this general sense, the question remains whether we have reason to abide by rational requirements in any particular case.
This global/local problem is familiar from other philosophical debates, most notably rule utilitarianism. Given that the overall best result will be obtained by following rules R, does this mean that we ought to follow R in each particular case – even those where it turns out to be locally suboptimal? Parfit thinks not, but suggests that so acting would, at worst, constitute “blameless wrongdoing”.[15] Such a view would allow one to deny that we have reason to follow rational requirements even though we have reason to possess the dispositions that would lead us to so act. But this position seems problematic because the only way to obtain the locally optimal result would be to violate the rules that lead to global optimality. Such a breach would have worse consequences overall.[16] So it seems short-sighted to say that we ought to breach the rules in such cases. Forsaking local gain for the sake of global optimality seems not just “blameless”, but also right. If the only way I could Φ, and thus achieve some local goals, would be to lose or weaken the rational dispositions that will see me right on many more future occasions, then surely this counts against Φ-ing. Thus we have reason, derived from the value of preserving rational dispositions, to abide by rational requirements.[17]
So, in sum, ought we to be rational? The quick answer is: ‘Yes in some senses, no in others, though all depending on what assumptions you’re willing to grant.’ It seems plausible that rationality is a kind of virtue – a fact which would provide at least some reason to be rational in character. If we add in the instrumental benefits that the rational faculty typically helps us to realise, this could plausibly support the stronger claim that we ought to be rational, in the global sense. As a character trait, rationality has both intrinsic and instrumental worth. But the local question is more difficult. We have seen that, in light of narrow-scope process requirements, the bootstrapping argument conclusively refutes the general claim that we always ought to do what is rationally required. This result should not be surprising; sometimes what we ought to do is not apparent to our rational faculties. Nevertheless, a slightly weaker claim – that rational requirements provide pro tanto reasons – can more plausibly survive the bootstrapping objection. My positive argument for this claim depends on our views about the transmission of normative warrant. I have suggested some grounds for thinking that local reasons can flow from the global ones granted above, and hence that we have reason to conform to rational requirements. Finally, I note that the complexities of this discussion will only seem relevant for normative realists or uncommitted non-cognitivists. Other cases are much simpler: non-cognitivism implies that a personal commitment to a framework of requirements suffices to give its demands normative force, whereas if the skeptic is correct then the entire discussion is moot.
References
Broome, J. (draft) Reasoning.
Dancy, J. (draft) ‘Reasons and Rationality’ in PHIL 471 Course Reader.
Kolodny, N. (2005) ‘Why Be Rational?’ in PHIL 471 Course Reader.
Parfit, D. (1984) Reasons and Persons. Oxford: Clarendon Press.
[1] Broome, p.20.
[2] Kolodny, p.1, emphasis removed.
[3] Ibid, p.16.
[4] Ibid, p.28.
[5] Ibid, p.25. In what follows I will sometimes leave off the words “intend to”, and instead speak loosely of rationality requiring one to Φ.
[6] Ibid, pp.32-33.
[7] See, e.g., ibid, p.41.
[8] Ibid.
[9] Ibid, p.64.
[10] Ibid, p.52.
[11] An example was offered in my previous essay, ‘Reasons for Belief’: “Suppose that God will reward people who act from selfless motives. This is clearly a reason for them to be selfless. But it is not a reason that they can recognize or act upon, because in doing so they would be acting from self-interest instead. They would no longer qualify for the divine reward, so it would be self-defeating to act upon this reason. In effect, the reason disappears upon being recognized, so it cannot possibly be a reason for which one acts. Nevertheless, it seems clear that, so long as the agent is unaware of it, the divine reward is a reason for them to act selflessly. So internalism is false. Just as there can be unknowable truths, so there can be [counter-deliberative] reasons.”
[12] Broome, p.91, notes this possibility.
[13] Kolodny, pp.49, 59. Dancy, p.16, suggests that displays of rationality are admirable in the sense that onlookers have reason to approve of the agent, rather than that the agent actually had reason to so act. “Our reason for approving is just that, if things had been as [the agent] believed, this would have given him reason to act.” (emphasis added).
[14] Broome, p.104.
[15] Parfit, pp. 35-37.
[16] Otherwise this action would be part of the ‘globally optimal’ solution, contradicting the original description of the case under discussion.
[17] Another way to develop this idea, as suggested by Jack Copeland in discussion of Broome’s seminar, would be to suggest that reliably useful dispositions, such as the rational faculty, provide prima facie reasons for action. The fact that rationality requires us to Φ might justify a defeasible or non-monotonic (and in some sense inductive) inference to the conclusion that we ought to Φ – a conclusion that further evidence could undermine. After all, if a disposition really will see you right in the majority of cases, then that provides a sort of statistical evidence that it probably will see you right in any particular (randomly chosen) case.