It's tempting to interpret the Equal Weight View (EWV) as offering positive normative advice: 'when you disagree with someone you take to be an epistemic peer, you should split your credence equally between your two conclusions.' But this would lead to implausibly easy bootstrapping. (Two creationists don't become reasonable by splitting the difference with each other. It's just not true that what they (epistemically) ought to do is give equal credence to both their unreasonable views. Rather, they ought to revise or reject their prior beliefs altogether. Cf. wide-scope oughts.) To avoid this problem, Adam Elga restates EWV merely as a constraint on perfect rationality. That is: if you fail to split your credence in this way, then you're making some rational error. But even if you satisfy the EWV constraint, you might be making some other, more egregious, error. So it doesn't follow that, all things considered, you ought to follow EWV.
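To put the bootstrapping worry in numbers (a toy illustration of my own, not Elga's): on the split-the-difference reading, if I hold credence $c_1$ in a proposition and my peer holds $c_2$, we should each adopt

$$c_{\text{new}} = \frac{c_1 + c_2}{2}.$$

So two creationists with credences 0.96 and 0.90 in creationism would "converge" on 0.93, a credence no more reasonable than the ones they started with.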
Or consider Roger White's argument against imprecise credence (i.e. against representing our doxastic states with ranges of values, say [0.4, 0.6], rather than single numbers). It shows that we're "irrational" (i.e. imperfectly rational) if we have anything other than a perfectly precise credence in any proposition. But given our cognitive limitations, I expect we'd do even worse if we tried to assign a precise credence to every proposition under the sun.
The fact is, we're not ideal agents. We have no hope whatsoever of being perfectly rational. And this leads to the problem of second best: attempts to conform to norms of ideal rationality may end up leading us even further from that goal. What we really need are norms of non-ideal ("second best") rationality, which recognize that we will make rational errors, and so incorporate strategies for recovering from such errors. In other words, we need to know what to do in case we are in an irrational position to start with: how can we revise our beliefs so as to make them less irrational? Bayesian updating and other rationality-preserving rules are no help at all when your initial belief state has no rationality to preserve.
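For concreteness, recall the familiar conditionalization rule: on learning evidence $E$, your new credence in a hypothesis $H$ should be your old conditional credence,

$$P_{\text{new}}(H) = P_{\text{old}}(H \mid E) = \frac{P_{\text{old}}(E \mid H)\,P_{\text{old}}(H)}{P_{\text{old}}(E)}.$$

The rule takes a coherent prior as input and delivers a coherent posterior as output; feed it an incoherent or unreasonable prior and it will faithfully preserve the defects.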
[I'm sure this isn't an original observation. I know many moral and political philosophers are interested in non-ideal theory. I'm just less familiar with epistemology. Can any readers point me in the direction of epistemologists who work on non-ideal theory?]
I tend to phrase this issue in the following way. It's easy to prove that certain surface features emerge from perfect rationality, but that gives us only very weak evidence that by manipulating the surface features of our deliberative practice we will move closer to perfect rationality. Very frequently, when we display surface features that conflict with perfect rationality, this may be the result of crude hacks that prevent us from being exploited by other equally crude hacks. Eliminate the ugly surface features and we are still irrational, but are also vulnerable to crude manipulations that wouldn't otherwise cause us trouble.
If your initial state had no info whatsoever, sure, you'd be in a pretty sad state. But if you had just made some rationality errors you might treat yourself as a Bayesian wannabe and reason accordingly.
Hi Richard,
I agree with you that Adam's move (to EWV as one constraint) is interesting. I haven't thought it through yet, but here are a couple of suggestions I would be keen to hear your thoughts about.
Firstly, a datum worth flagging, and which Adam agreed to in conversation, is that this move undermines the advantage of EWV as an internal decision-making procedure. In conversation with people I usually find that the readiness of the prescription in difficult cases - "split the difference!" - buttresses their belief in EWV. But of course this is no longer the case. Now we should only split the difference if various other conditions are satisfied. This may undermine EWV's intuitive appeal.
Second, I wonder what sort of other rational constraints EWV is consistent with. Obviously Adam has something or other Bayesian in mind. But, at a stretch, might this constraint be consistent even with Williamson-style non-phenomenalism?
Sorry not to speak to your main point.
And thanks for the word on blogging!
Strategies for recovering from rational errors? Common sense springs to mind, fuzzy as it is, and such applications of it as the actual practices of practicing scientists; so maybe those studying the latter would have some ideas, but I don't really know. Self-criticism, imagination, objectivity and so forth? I'm reminded of how we can always resolve an apparent contradiction by discovering an equivocation (and so, for example, precisifying predicates). Maybe there can't be a theory of it until we know everything, which makes me a mysterian, I guess. (Maybe teachers know about such things, in a practical way, and theorists of teaching know of such?) Your question is certainly the most important one in modern philosophy!
Barry - yeah, I like your first point. On the second, a mere constraint is technically consistent with any other constraints whatsoever. It's just that we risk ending up with a set of constraints that cannot possibly all be satisfied at once. So I take it you're really asking what other rational constraints are co-satisfiable with EWV. I guess we can rule out any constraint which asks us never to depart from what our first-order evidence supports. But other than that, I'm not sure...