Harms, Benefits, and Framing Effects

Monday, October 02, 2017

Kahneman and Tversky famously found that most people would prefer to save 200 of 600 people for sure over a 1/3 chance of saving all 600, and yet would prefer a 1/3 chance of none of the 600 dying over the guaranteed deaths of 400 of the 600. This seems incoherent: our preferences over the very same pair of options are reversed merely by describing the case in different words.
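(To make the equivalence explicit: in the "saving" frame, the sure option saves exactly 200 of the 600, while the gamble saves all 600 with probability 1/3 and nobody otherwise, for an expected (1/3) × 600 = 200 lives saved. In the "dying" frame, the sure option means exactly 400 of the 600 die, while the gamble means nobody dies with probability 1/3 and all 600 die with probability 2/3, for an expected (2/3) × 600 = 400 deaths. The options are thus identical across frames, in both outcomes and expected values; only the wording differs.)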
In 'The Asian Disease Problem and the Ethical Implications of Prospect Theory' (forthcoming in Noûs), Dreisbach and Guevara argue that the folk responses are compatible with a coherent non-consequentialist view. Their basic idea (if I understand them correctly) is that the "400 will die" case is suggestive of a different causal mechanism: perhaps the 400 die from our intervention, so the choice is between guaranteed and gambled harms, whereas the "saving" choice is between guaranteed and gambled benefits. They then suggest that non-consequentialist principles might reasonably mandate a special aversion to causing guaranteed harm (and so deem it better to risk harming either all or none, despite there being no difference in expected value between the sure thing and the gamble). In the first case, by contrast, they suggest that non-consequentialists might find it easier to justify saving some lives as a "sure thing" than to take a gamble that would most likely save nobody at all.
Such non-consequentialist principles sound pretty odd to me, but let's grant them for the sake of argument. I think that D&G's argument fails for the simpler reason that we can easily clarify the thought experiment so that there is no change in causal mechanism between the two cases. Rather than merely specifying that "400 will die", we may clarify: "400 will die of the disease." I trust that this makes no intuitive difference to the case: so long as the scenario is framed in terms of how many people die of the disease rather than how many are saved from it, we prefer to gamble with lives in a way that we otherwise would not. The inconsistency cannot be explained away by positing that our intuitions are tracking the deontological property of whether we cause a harm, for it evidently remains even when there is no such causal discrepancy between the cases.
So it seems that our prima facie intuitions here are simply incoherent -- and responsive to arbitrary framing effects -- after all. Or am I missing something?
4 comments:
I don't think your proposed revision actually addresses the issue, which is that a very common natural reading in ordinary contexts of
If A is done, B will happen
is that by doing A, one causes B. This is not affected by simply revising the consequent. That is, at least one natural reading of
If program A is adopted, 400 people will die of the disease
is that the adoption of program A causes 400 people to die of the disease. Same problem, assuming the diagnosis is correct; literally the only information people have about the program is that it would (on one reading) cause 400 people to die of the disease. And surely it's not surprising that they would be inclined to think that a bad deal.
Independently, I wonder if there might be an effect I notice with my students on trolley problems: if they are not explicitly told that something's guaranteed, they often assume that it need not be. So it could well be that people are reading '400 will die' as 'at least 400 will die' and '200 will be saved' as 'at least 200 will be saved', which also breaks the symmetries.
Hmm, good point. I wonder if further refinements could avoid the natural causal reading? (The guarantee we can add in easily enough by specifying that 'exactly 400 will die'.) Maybe something like: "If program A is adopted, there will still / nonetheless be 400 people who die of the disease." That no longer sounds like A causes the deaths, right?
That seems plausible. The thing that's tricky is that (for this) you need the qualification simultaneously to neutralize any causal implicature and not ruin the symmetry that's important for the Kahneman-Tversky set-up. Offhand, I can't think of any problem a 'still' or 'nevertheless' qualification would cause for either one, so it looks like it would work.
I think the guarantee requires that the causal context already be neutralized for any specification to work (otherwise the specification gets confined to the causal context), but with 'still' or 'nevertheless' that doesn't look like it would be a problem, either.
I don't know the full context of the question, but I wonder whether nonlinearity plays a role: losing the last members of a group, for example, may be especially weighty, and the chosen answers seem to prevent that.