“[T]he term ‘belief’ is ambiguous. It can refer to the thing believed, and it can refer to the act or state of believing that thing. Talk of ‘reasons for beliefs’ inherits this ambiguity.” – Alan Musgrave.[1]
“[O]ne cannot settle on an answer to the question whether to believe p without taking oneself to have answered the question whether p is true.” – Nishi Shah.[2]
Can there be reasons for belief that are not reasons for the truth of the thing believed? The negative response has some intuitive appeal, and fits well with our epistemic practices: the way we generally justify believing p is to point to evidence that p is true. Nevertheless, I will argue that there can be non-truth-indicative reasons for belief. Evidentialism, the view that reasons for belief must be reasons for the truth of the thing believed, can be challenged on two general fronts. First, there can be non-truth-indicative epistemic reasons for belief, that is, reasons which derive from distinctively epistemic norms other than that of truth. Second, there can be non-epistemic or practical reasons for belief, as when holding the belief would be morally or prudentially advantageous.
We typically take truth to be the normative standard against which to assess beliefs. We say “belief aims at truth,” and assess beliefs according to whether they achieve this goal. By this standard, reasons for belief will be indicators of truth. We have reason to believe p when we have reasons for the truth of p. This is the standard picture. But there are other normative frameworks we might adopt instead, or in addition. For example, we might assess a set of beliefs for its internal coherence, rather than applying an external standard such as truth. Despite making no explicit reference to truth, the standard of internal coherence is nevertheless clearly an epistemic standard, rather than a practical one. And, indeed, it is quite a plausible one. I thus propose that we have epistemic reason to believe p if doing so would yield a more coherent belief set. The goal of internal coherence and the practice of good reasoning both have normative force, independently of their relation to the external goal of obtaining true beliefs.
Suppose that you believe that p, and believe that if p then q, and so on this basis you rationally form the belief that q. The two former beliefs provide you with a reason to adopt the latter belief. In doing so, you make your belief-set more coherent. This is epistemically rational. So you have reasons to believe q. But are there any reasons for the truth of q? If asked, you would likely point to the propositions that p, and that if p then q, as being your reasons for taking q to be true. However, let us suppose that your former beliefs are false: p is false, and if p then q is false. So while you took these to be reasons for the truth of q, in fact they are not. You were mistaken about whether there are reasons for the truth of q. There are no such reasons in this case.
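To make the structure of this inference explicit, here is a minimal sketch of the modus ponens step in Lean, with the propositions p and q left abstract. The proof establishes only that the inference form is valid, i.e. that q follows from the premises; whether the premises are themselves true is a separate question, as the next paragraph discusses.

-- Modus ponens as a valid inference form: from hypotheses `p` and `p → q`,
-- the conclusion `q` follows, regardless of whether those hypotheses are
-- actually true of the world.
example (p q : Prop) (hp : p) (hpq : p → q) : q := hpq hp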
One might object that this conclusion is overly hasty. While the false propositions that p and that if p then q cannot be reasons for the truth of q, perhaps there are other reasons that we have overlooked. In particular, since modus ponens is a truth-preserving rule of inference, the conclusion will inherit whatever justification the premises have. Whatever truth-indicative reasons you have for your former beliefs will be transmitted to the conclusion, thus serving as (inconclusive) reasons for the truth of q. That, the evidentialist argues, is why you have some reason to believe q. But let us suppose that your former beliefs were entirely unjustified according to external standards of evidence or truth-indicativeness. We stipulate that there are no such reasons for q to inherit. Nevertheless, you do have a reason to believe that q, simply because of the coherence of this conclusion with your other beliefs that p and that if p then q. The internal standard of coherence provides normative force quite independently of external considerations. Of course, it does not force you to believe q. Perhaps you ought to reject the premises instead. But this is consistent with there being some pro tanto reason to believe q; it may just be that you have even more reason to avoid all three beliefs. It would be implausible to suggest that there are no reasons whatsoever for believing q. The mere fact that doing so would boost the coherence of your belief set is surely a consideration in favour of the belief, even if there are other, more strongly favoured, ways to achieve this goal. Thus we find that the evidentialist is mistaken: it is possible to have a reason for believing q which is not also a reason for the truth of q.
This vindicates Musgrave’s distinction, though he himself employs it in a different fashion. Musgrave wants to accept inductive scepticism – the view that there can be no inductive evidence for the truth of a belief – but hold that our everyday beliefs can be reasonable nonetheless. In particular, he claims that if an evidence-transcending hypothesis H survives our best attempts to falsify it, then “this fact is a good reason to believe H, tentatively and for the time being, though it is not a reason for the hypothesis H itself.”[3]
This claim is not well-supported, however. It is not enough to point to the possibility of non-truth-indicative reasons. Musgrave needs to explain precisely what sort of reason we have to believe H. If it is not a truth-indicative reason, it must be a reason of some other sort. But it isn't clear what other sorts of reasons there might be in this case. He doesn't seem to be pointing to pragmatic reasons of any sort. But nor does he appeal to any alternative epistemic norms, such as the 'internal coherence' approach I advocated above. Instead, Musgrave appeals to the fact that the hypothesis is not yet known to be false. This is the sort of appeal one would expect from someone who was still working within a truth-focused normative framework. The reason we feel an intuitive pull towards accepting falsification-resistant hypotheses is presumably that we take such resistance to be evidence that the hypothesis is true. If this were not the case, then it would no longer be clear why we should care about resistance to falsification. Why believe the hypothesis if we have no reason to think it more likely true than not? It seems doubtful whether Musgrave has pointed to a genuine example of non-truth-indicative reasons for belief.
A more intuitively compelling instance of the distinction is found by appeal to pragmatic reasons. Suppose a demon threatens to torture your family unless you believe (N) that the number of stars is even. Clearly this threat is in no way evidence for the truth of N. But it does seem a very strong reason to believe N, nonetheless! Granted, this is not something you would be capable of doing through willpower alone. But suppose that an angel offered you a magic pill that would instil this belief if swallowed. It is obvious that you ought to swallow the pill. But the pill is purely instrumental to the end of obtaining the belief in N. You would have no reason to take the pill unless you had reason to believe N. So you must have reason to believe N. In this scenario, it appears that the continued wellbeing of your family is a reason for the act of believing, but not a reason for the truth of the thing believed.
It is undeniable that there is a practical reason of some sort relating to believing N in the above scenario. But perhaps we misdescribe it in calling it a reason for the act of believing N. We might instead say it is a reason for the act of getting oneself to believe N. After all, taking the pill cannot be described as an act of belief per se. Rather, it is an act of getting oneself to believe. The belief is a consequence of the intentional action; it is not the action itself. Indeed, there is no ‘act of belief’ occurring here at all. The belief is something that happens to you, rather than something you do.
In this case, belief is treated as a mere state of feeling, like hunger or tiredness, rather than an intentional attitude. Clearly, hunger is not a rational action, or something that stems directly from your agency. Rather, it is something that happens to you. It is also something you can make happen. But it is clear that in doing so your action is not being hungry, but rather, getting yourself to be hungry. It would be nonsensical – a category error – to claim that you have ‘reasons for hunger’. Hunger is a mere state, occurring outside the scope of your agency. While we can act so as to bring such states about, and have reasons for doing so, this action is distinct from the state itself. There can be no normative reasons for the latter.[4]
We are now in a position to prove that the magic pill scenario fails as a counterexample to evidentialism. Practical reasons are reasons for action. I am here using the word ‘action’ in a broad sense, to cover all intentional occurrences that fall within the scope of our agency; intuitively, something that we do. It thus includes deliberative judgments. Now, only actions can be supported by reasons for action. So if X is not an action, then there can be no practical reasons for X. In the described scenario, believing N is not an action, but a mere state. The belief is brought about by acting upon yourself as if you were an alien object. The resulting belief is thus a consequence of your action, not an action itself; something you make happen, not something you do. Thus, there can be no practical reasons for believing N. Contrary to initial appearances, this is not really a case of reasons for belief that are not reasons for the truth of the thing believed. Instead, we have found reasons for getting oneself to believe, as distinct from reasons for the act of believing (since there is no such act here).
Although believing N in the above scenario turned out not to be an action, this is a very rare case. Most often, believing is something we do, not something that happens to us. Beliefs usually develop from our internal mechanisms in a manner which allows them to be attributed to our agency, rather than some external source such as a magic pill. Doxastic deliberation, where one reflects on the evidence and thereby comes to a judgment about what to believe, is the clearest case of such rational influence. So even if the particular believing mentioned above was not an action, and thus not something we could have reasons for, there are acts of believing for which the possibility of practical reasons is still open. Perhaps the agent in the magic pill scenario later reflects on their induced belief N, and makes a judgment about whether to retain the belief. This is a genuine ‘action’ (in my broad sense), and thus potentially open to support from practical reasons. So let us now address the general question whether practical reasons can apply to doxastic deliberation. Is it possible for practical considerations to influence what one ought to conclude when deliberating over whether to believe p?
Shah thinks not. He begins by noting a phenomenon which he calls ‘transparency’: within the first-personal perspective of doxastic deliberation, “the question whether to believe p seems to collapse into the question whether p is true.”[5] At first glance, the magic pill scenario appears to be a counterexample to this claim. The agent was deliberating whether to believe N (by taking the pill), and paid greater heed to practical concerns than evidential ones. But in fact this was not a case of doxastic deliberation at all. We have already noted that the resulting action was not an act of belief, but an act of getting oneself to believe. As the prior deliberation concerned the decision to act in that way, it was deliberation over whether to get oneself to believe N, rather than deliberation over whether to believe N per se. More generally, it is not doxastic deliberation when one deliberates about whether to manipulate oneself into having a belief. You can only deliberate about what to do (or judge), and forced beliefs are not things you do; they are things that happen to you, perhaps as a result of some other action that you do. In any case, one can no more deliberate about whether to have a forced belief than one can deliberate about whether to be hungry. In either case, what one is really deliberating about is whether to act in such a way as to bring the state about.
Once this is clarified, transparency does seem undeniable. The question we then face is how to explain it. Shah’s answer is that “it is analytic of belief that it ought to be true”.[6] Recognition of this normative requirement prevents us from deliberately believing for non-truth-indicative reasons. This is not to say that our beliefs always do track the evidence. Shah emphasizes that transparency only holds in deliberative contexts, and this is part of what needs explaining. Our other belief-forming mechanisms are not so pure. We can be influenced by wishful thinking, confirmation bias, and so forth. It is a virtue of Shah’s account that it can explain this. Because it locates evidential norms in the concept of belief itself, it explains why transparency is found only in the context of doxastic deliberation, where we conceive of our beliefs as such, and not in sub-personal mechanisms that do not exercise the concept of belief.[7]
But Shah’s answer is still too strong. If it were analytic that beliefs ought to be true, then it would be incoherent to assert “p is false but S ought to believe it”. But this statement does not seem incoherent. For one thing, not all false beliefs are irrational. If we know that S has been exposed to a great deal of compelling but misleading evidence, then we might well judge that S ought, as a matter of epistemic rationality, to have the false belief. Shah might accommodate this objection by tweaking his norm slightly. Still, the false accusation of incoherence will remain for examples involving purported practical reasons for belief. Many of us would judge that an agent ought to have a false belief if the fate of the world depended on their doing so. The agent in question might even agree, and wistfully respond, “Yes, it would be best, all things considered, if I were to believe p. How unfortunate that I know it to be false!” Evidentialists will insist that we are mistaken in our judgments here, but to accuse us of self-contradiction is an implausibly strong claim for them to make.
This problem can be avoided by changing the proposed norm of belief. Rather than aiming at the external goal of truth, the appropriate norm might instead appeal to internal coherence, as described earlier in this essay. A coherence norm forbids what Foley calls “near-contradictions”, i.e. believing p whilst also believing the evidence to indicate that p is likely false.[8] But while it forbids believing near-contradictions, it does not forbid believing straight falsehoods. So this alternative has all the advantages of Shah’s account without the above drawback: it can explain transparency without denying the coherence of the statement “p is false but S ought to believe it.”
Even worse, in light of these examples, Shah’s account can no longer explain transparency at all. It is possible, if perhaps mistaken, for us to take practical reasons as bearing on what another deliberator ought to believe. But the concept of belief is engaged in third-personal judgments just as in first-personal ones. Indeed, as noted above, there may be many cases in which we exercise our concept of belief without necessarily feeling bound by evidential norms in judging what ought to be believed. So it cannot be concept-mediated recognition of the normative hegemony of truth that explains why transparency occurs in first-personal doxastic deliberation. Third-personal deliberative judgments may also involve the concept of belief, but no such norm is necessarily recognized. So Shah’s account fails. An adequate explanation for transparency must rest on a feature that is unique to first-personal doxastic deliberation, and his does not.
Focusing on the first-personal aspect of transparency can also serve to bring related phenomena to our attention. Consider the Moore-paradoxical incoherence of asserting “p, but I lack sufficient evidence that p is true.”[9] Or consider the related principle: It is impossible to believe p whilst recognizing that one lacks adequate evidence for the truth of p.[10] Just as we cannot knowingly believe falsehoods, so we cannot knowingly believe what could very well be false for all that the evidence shows.
All three phenomena have the same root explanation. To deliberately believe p – that is, to believe p as a result of doxastic deliberation or rational reflection – is to judge that p is true. If one recognizes that the evidence is against p, then one cannot coherently judge that p is true, and so cannot deliberately believe p. As for practical reasons, while they may lead one to judge that p would be good to believe, they have no bearing on rational judgments about whether p is true. But again, it is this latter judgment that is constitutive of deliberative belief, so this explains why practical reasons can have no influence over our doxastic deliberations. This is no merely contingent fact about human psychology. Rather, it is a conceptual fact about deliberate belief that it arises through settling the question of what is true. Even if we imagine creatures that could respond to practical reasons and bring themselves to have a belief through sheer force of will (immediately forgetting about its origin so as not to undermine the new belief), this would be no different in principle from taking a magic pill.[11] Their intentional action is still getting themselves to believe, rather than believing itself, for they never made an intentional judgment that p is true. Instead, after judging that p would be good to believe, they acted on themselves – through sheer force of will – to bring it about that they held this belief.
We are led to conclude that we cannot deliberately believe for practical reasons. But what then? Evidentialism only follows if we assume the internalist claim that a reason for S to Φ must be capable of being a reason for which S Φs.[12] But this claim is mistaken. A consideration may count in favour of Φ-ing even if that consideration is necessarily inaccessible to the agent herself. For example: Suppose that God will reward people who act from selfless motives. This is clearly a reason for them to be selfless. But it is not a reason that they can recognize or act upon, because in doing so they would be acting from self-interest instead. They would no longer qualify for the divine reward, so it would be self-defeating to act upon this reason. In effect, the reason disappears upon being recognized. Nevertheless, it seems clear that, so long as the agent is unaware of it, the divine reward is a reason for them to act selflessly. So internalism is false. Just as there can be unknowable truths, so there can be inaccessible reasons. This doesn’t seem to change when we modify the example so that God rewards people who believe what the evidence indicates to be true. The practical reward counts in favour of these beliefs, even if the agents could never recognize or act upon this reason in their doxastic deliberation. No doubt there is much more to be said here, but the debate over internalism goes beyond the scope of this essay. Let us simply note that Shah’s internalist premise is certainly open to contention.
If we reject internalism then there is no longer any conflict between Musgrave’s distinction and Shah’s phenomenon of transparency. We may grant that agents cannot deliberately believe p without taking themselves to have settled that p is true, yet still hold that there can be other reasons for belief besides those that are potentially accessible to the agent. More generally, it may help to expand Musgrave’s distinction to include a third type of belief-related reason. The core controversy is over reasons for (the act of) believing. We have more clear-cut cases at either extreme: reasons for the truth of the thing believed are clearly just truth-indicative reasons, and reasons for getting oneself to believe can clearly include practical reasons. Whether one thinks that any of these practical reasons can also count as reasons for believing per se will probably depend upon whether one rejects internalism. But even if one rejects pragmatic reasons for belief, I have also pointed to the possibility of epistemic yet non-truth-indicative reasons for belief. These can arise from the normative force of internal coherence, a force which applies independently of external goals such as truth. Thus it seems that the weight of evidence is against evidentialism, and that we can indeed have reasons for belief that are not reasons for the truth of the thing believed.
[1] Musgrave, p.21.
[2] Shah, ‘How Truth Governs Belief’, p.2.
[3] Musgrave, p.24, original italics.
[4] Scanlon, p.20.
[5] Shah, ‘How Truth Governs Belief’, p.1.
[6] Shah, ‘How Truth Governs Belief’, p.44.
[7] Ibid., pp.25, 34.
[8] Foley, p.215.
[9] Adler, p.272.
[10] Ibid., p.273.
[11] Hieronymi, p.24.
[12] Shah, ‘A New Argument for Evidentialism’, p.5, uses this assumption as an undefended premise in his argument.
Bibliography
Adler, J. (1999) ‘The Ethics of Belief: Off the Wrong Track’ Midwest Studies in Philosophy, 23: 267-285.
Foley, R. (1987) The Theory of Epistemic Rationality. Harvard University Press.
Hieronymi, P. (forthcoming) ‘Controlling Attitudes’ Pacific Philosophical Quarterly.
Musgrave, A. (2004) ‘How Popper [Might Have] Solved the Problem of Induction’ Philosophy, 79: 19-31.
Scanlon, T. (1998) What We Owe to Each Other. Harvard University Press.
Shah, N. (2003) ‘How Truth Governs Belief’ in PHIL 471 Course Reader (also published in Philosophical Review).
Shah, N. (forthcoming) ‘A New Argument for Evidentialism’.