Thursday, December 15, 2005
Intuitions and Framing Thought Experiments

I really enjoyed Adrian Walsh's AAPC talk on thought experiments in applied ethics. He identified four general purposes for employing a thought experiment: as a counterexample to a universal claim; as an "intuition pump" to support a general claim; as a "principle cleaver", clarifying distinctions by separating variables that are commonly conflated in actual situations; and as a way of "reimagining" stale debates. Walsh suggested that "modally bizarre" examples are not intrinsically suspect. Rather, what matters is whether the thought experiment is used appropriately in the broader argument.
For example, suppose one argues that eating meat is (actually) wrong because it involves killing animals. Someone might respond that we can imagine futuristic scenarios where meat is grown in a vat, so no animals need be killed. But that thought experiment is irrelevant to the present argument. Sure, it shows that meat-eating is not necessarily wrong, but that wasn't the question at issue. The fact is that in the actual case it does involve killing, so imagining some other situation is merely changing the subject, and does nothing to show that meat-eating is not wrong in our actual circumstances. But the problem is one of relevance, not modal distance.
The obvious worry about bizarre cases is that they might over-stretch our intuitive competence. If our intuitions are reliable at all, perhaps it's because they've been honed by our experiences, producing a kind of philosophical "know-how". But while this might yield reliable judgments for familiar scenarios, it's (even) less clear whether we are competent at making correct intuitive judgments in unfamiliar - and sometimes downright bizarre - circumstances. What reason do we have to trust our intuitions in such cases? (Or any cases, for that matter?) A pity I didn't think to raise these issues at the time.
I suppose an alternative view might see our intuitive judgments as arising from internalized general principles that are universally applicable, and so should apply just as well to novel scenarios as to familiar ones. Which picture do you think is more plausible? Are there other possible accounts that I've missed?
Anyway, back to the talk: one interesting issue that came out in comments was the importance of framing effects. Consider, for example, the Kahneman & Tversky experiments described here. One and the same scenario elicits different (conflicting) intuitions depending on how it is described. Assuming we reject ethical contextualism, one of these intuitions must be mistaken. So that would seem to cast doubt on their reliability. This is most obviously problematic for general "intuition pumps", but I think it's also a problem for 'counterexample' uses. If we have no determinate intuition about a scenario, then it's no longer clear that it can constitute a counterexample. Looking at it one way, you'll think that it does, but then seen from the other perspective you'll change your mind and think the universal claim holds true of this case too after all. When I brought up this point in comments, Walsh wanted to deny this, and so had to "bite the bullet" and say that the reframed descriptions were actually of different scenarios. But that seems most implausible to me.
Incidentally, in Weatherson's fascinating recent post on absolutism and uncertainty, he notes that "where we set the 'zero-point' or status quo makes a big difference for how we act." But is there really any fact of the matter about what the 'default' outcome is? The K&T case linked above would seem to suggest that this is merely a difference in our descriptions, not in reality. Whether you say that 400/600 will die, or that 200/600 will be saved, you describe one and the same fact. But the descriptions differ in their implied baseline, or what they convey as being "the natural progression of things". Could there be a metaphysical fact of the matter regarding where the true baseline lies? What sort of fact could this be, that would tell us whether a survivor was saved from the jaws of death, or whether it was simply a matter of death failing to cut his life tragically short? When faced with branching possibilities, how can we say that one is the "default" path of fate, and the other some kind of unnatural "diversion"? (But if we can't do this, then what is our basis for caring more about "losses" than "forfeited gains"? Doesn't this distinction require a baseline?)
1 comment:
I tend to agree with Kathleen Wilkes's argument in Real People that bizarre cases are intrinsically suspect, not because of our intuitions, but because our reasoning in such cases begins to collapse under the complexity of factors that need to be kept track of if we are to reason correctly about the situation. In a sense, we should regard our examples as somewhat chaotic: a slight change in basic assumptions can lead to massive differences that need to be tracked; bizarre cases, however, require massive changes in basic assumptions, and so we need to keep track of exponentially greater changes. It's true that we should be able to take into account new cases; but there's no need to go to a bizarre case. Faced with a scenario, we need to keep in mind what is necessary to analyze it, asking:
How close is it to real phenomena we actually encounter?
Do we have a theory of the phenomena sufficient for being able to understand what would happen if we suppose the differences that are proposed in this scenario?
What evidence do we have for thinking such a scenario is even possible?
And so forth. A Wilkes-type approach will use thought experiments, but will keep fairly close to real experiments (that have been done, or that might actually be done).