The Asymmetric No Independent Weight View claims that both parties should believe whatever the first-order evidence in fact supported all along. So if you were reasonable in your initial assessment of the evidence, you can reasonably give no weight at all to my (unreasonable, as it happens) opinion.
A third option, Tom Kelly's Total Evidence View, allows that we often should take higher-order evidence into account -- for peer disagreement is evidence that we have evaluated the first-order evidence incorrectly. However, TEV denies that this automatically swamps all first-order evidence. Instead, what's reasonable to believe is determined by one's total evidence (both first- and higher-order, combined).
When put like that, TEV seems clearly right. But it may be unhelpful in practice, if we cannot actually tell where the weight of (first-order) evidence lies. Here are some fun cases from last week's epistemology class which put pressure on the view:
Case 1 (ordinary peer disagreement): You and I are expert weather forecasters with the same data set, training, and track record. Nonetheless, we disagree in our current forecasts. Should we revise our judgments on this basis?
Case 2 (chancy curse): As above, but an oracle tells us that long before we were born, a fair coin was flipped which would definitely curse one of us with all the phenomenology of having evaluated the data correctly, but ensure error.
Case 3 (simple curse): As in case 2, but without the coin flip. That is, all we know is that it was determined long ago -- by some unknown method -- that one of us would have all the phenomenology of having evaluated the data correctly, but be guaranteed to make an error.
It seems like the higher-order evidence swamps in case 2, obliging us to split the difference. But then case 3 may seem relevantly similar, and this in turn is effectively no different from case 1 (esp. if determinism is true). So if we want to avoid the Equal Weight View, we must either reject the intuition that we're rationally on a par in case 2, or else explain how to avoid the slippery slope through case 3. Any suggestions?
See also: How Objective is Rationality?
"When put like that, TEV seems clearly right."
It does? I tend to think the equal weight view looks clearly correct. Could you perhaps motivate the superiority of TEV a bit more?
Alex
(Also, given that you have disabled anonymous comments, perhaps you could also remove the word verification?)
Well, it seems odd to give the first-order evidence no weight. Given that there is both first- and higher-order evidence on the table, it would seem appropriate to take all this evidence into account.
[I'll try removing word verification. But spam bots may be equipped with a Blogger account, so if they attack I'll have to switch back.]
Why should I change my conclusions at all? If I am in a disagreement with someone, I want to find out who is right.
If that person can provide a reason for me to change my mind, either to his or her way of thinking or to `splitting the difference', then he or she should do it. Otherwise, let the best man or woman win.
It's not a competition. Presumably you (both) want to believe whatever is true. So if a reliable epistemic peer comes to some conclusion, their reliability serves as a kind of higher-order evidence that the conclusion in question is true.
(This is especially obvious when you are not peers, but the other person is your epistemic superior. If I believe some claim in physics, and Einstein disagrees, then - all else equal - that should lead me to change my views!)
For case 1:
if I cannot find a fault with the procedure by which you arrived at your prediction, nor a fault in my own procedure, nor any other reason to prefer one prediction to the other, and the two predictions are radically divergent, I'd say the most natural conclusion would be that I'm confused and/or ignorant in some important respect that I cannot precisely pinpoint. Thus the most rational decision for me would seem to be to not believe either of our predictions. (This is the opposite of the conclusion that we should "split the difference".)
The exception to this would be if I had good reason to believe that one of us was very likely right. In that case it might be more rational to just arbitrarily choose to believe one of the predictions (this would give me a 50% chance of very likely being right).
I think the same line of reasoning might also hold in the other two cases Richard presents. Anyway, I'm not sure which of the alternatives the above reasoning is in accordance with, though it seems to me it's TEV.
I need to ask a question for clarification, however. Is the Asymmetric No Independent Weight View supposed to be an externalist view? (i.e. I should believe the evidence I am externally justified in believing, even if I'm ignorant of this justification?)
Not exactly -- note that even internalists hold that you should believe what is justified by your (internally accessible) evidence, even if you happen to be subjectively "ignorant of this justification". For example: Coherentism - the claim that you should believe what best coheres with the rest of your web of beliefs - is as internalist as they come. But a proposition may cohere with my other beliefs without my appreciating this fact about my internal states.
P.S. I would classify 'suspending judgment' as a form of Equal Weight View, since you are not taking into account any first-order evidence. (That is, you do not allow any asymmetry of response between the person who was actually right as opposed to the one who actually made the error. You give equal weight to both.)
Fair enough. Point taken regarding the possibility of ignorance for internalists. Nonetheless, the big difference between externalists and internalists is that internalists hold that they are at least in a _position_ to know if they are justified in their beliefs, even if they may be ignoring relevant evidence due to negligence or something like that.
Externalists, on the other hand, hold that sometimes there is no way to ever know that one is justified, even in principle. It seems the asymmetry you describe in the Asymmetric No Independent Weight View analysis of the forecasting example, where one forecaster is justified (based on first-order evidence) and the other is not, assumes this kind of externalist view of justification -- that is, neither forecaster could ever find out how they were right or wrong by re-examining their methods, but one of the forecasters is nonetheless justified in believing his/her method on externalist grounds.
re-P.S. I do think that the decision process I describe above takes into account first-order evidence. The fact that you and I are both respectable forecasters who disagree on a prediction should prompt us to re-examine the first-order evidence and the methods we used to analyze it. If we still cannot find any errors in either of the two methods by which we arrived at our competing predictions, then we clearly aren't really understanding what's going on. It might still turn out to be the case that one of our predictions is (coincidentally) true (and it might even -- from an external vantage point -- be the case that one of the methods is in fact correct, although we are in no position to ever find this out). But we would still not be justified in believing this prediction (at least from an internalist standpoint) since we cannot decide what's right or wrong about our differing methodologies.
In other words, this line of reasoning does not recommend a suspension of belief regarding which one of the two predictions is true (which would be an Equal Weight View), but rather an abandonment of belief in either prediction altogether.
You say:
"It's not a competition. Presumably you (both) want to believe whatever is true."
Sure, I like to believe the truth, but I base my views on reasons, not other people's views. How does someone disagreeing with me without giving me any reason for their disagreement have any effect on how I came to my conclusions?
You say that this person's position can be considered higher order evidence. I'm not sure how: The knowledge that some smart person disagrees with you could cause you to reevaluate your position, but if no new evidence is found, then you should reason the exact same way. Some smart person disagreeing with you is the only new evidence you have. If that is enough to make you revise your position, then your other reasons for holding it must not have been all that strong.
You say:
"(This is especially obvious when you are not peers, but the other person is your epistemic superior. If I believe some claim in physics, and Einstein disagrees, then - all else equal - that should lead me to change my views!)"
If your epistemic superior cannot give you any reasons why you are wrong and he or she is right, you should definitely not change your views based upon their perceived reputation. If your epistemic superior told you to jump off a bridge, would you?
If I knew that my epistemic superior was speaking in good faith, and was better than I at discerning when people ought to jump off bridges, then it would be foolishly stubborn of me not to at least consider it! :-)
Seriously, though, you should base your views on evidence, and the testimony of a reliable source is (ipso facto) evidence.
Suppose a 100% reliable oracle tells you that you will die in a crash if you drive to work tomorrow. You ask for reasons, and the oracle won't share: either because she can't be bothered, or because she doesn't fully understand how her own inner workings operate. Nonetheless, you'd be a fool to drive to work tomorrow.
(And the essential point stands even when we decrease the oracle's reliability slightly. You should no longer be certain she's right, of course. But you should give some credence to her judgment, proportional to how reliable an epistemic agent you think she is, mitigated by any specific reasons you have for thinking she is more likely than you to be mistaken in this particular case.)
I think we are now in agreement: I never said not to consider what the other person said, and I granted that "Some smart person disagreeing with you is the only new evidence you have." It just seems to me that if someone merely disagreeing with you causes you to change your mind, you are giving this person's opinion far too much weight compared to your own prior assessment. It looks as if you have no faith in your own reasoning.
As for 100% reliable oracles, these don't exist. The best we can do is get people like Einstein. And I didn't say not to believe people like this, but that you shouldn't just take everything they say on reputation; Einstein made mistakes too. If a respected teacher is too busy and says 'Take my word for it,' I'd expect you to do so, but then to follow up later when the person is less busy. At best we can look at the evidence of him or her believing something as a reason to hold off from making a decision, not to immediately revise your views.
"...You ask for reasons, and the oracle won't share: either because she can't be bothered, or because she doesn't fully understand how her own inner workings operate."
Ah! This kind of provision obviously renders my previous analysis of the weather forecasting case completely irrelevant. I had assumed that there was a free flow of information between the agents. Silly me.
Richard said: "So if a reliable epistemic peer comes to some conclusion, their reliability serves as a kind of higher-order evidence that the conclusion in question is true."
I don't think this can be accepted as generally true; it's not higher-order evidence that the conclusion is true but that no obvious mistake was made, in exactly the same way checking your work (or having someone check it) is. Similarly, if there is disagreement, this is not evidence that you are wrong, but that there is (assuming that the difference can't be explained wholly in terms of difference of perspective, e.g., you standing there while I stand over here) a potential pitfall somewhere, of the sort that people of your ability and background can fall into, and that you might (but might not) have fallen into it. Similarly with epistemic superiors (mutatis mutandis).
Because of this I think that none of these cases gives us any reason to revise or split the difference (they do, however, give reasons to consider whether we need to revise, which is altogether different). I see no good reason for 'splitting the difference' under any circumstances.