(A) Tiny chances. Here Nefsky adverts to Budolfson's arguments that we might learn details about buffers in the supply chain (etc.) that allow us to be disproportionately confident (beyond what raw averages would lead us to expect) that we are far from the collective threshold for triggering a change in production levels. Budolfson's arguments are theoretically interesting, but not obviously applicable in practice. They depend upon our collective consumption levels being stable or otherwise highly predictable across time (otherwise we couldn't be so confident that we're still far from the relevant thresholds). But in the face of social movements encouraging people to address collective action problems (e.g. by eating less factory-farmed meat), I'm not sure that the relevant degree of consumer predictability is satisfied. Especially for those of us who are considering making consumption changes in our own lives on the basis of moral reasons, it seems reasonable for us to be highly uncertain about how many of our fellow citizens may soon do likewise. But then we shouldn't be so certain that the relevant thresholds will remain distant.
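To make the threshold reasoning at issue concrete, here is a minimal sketch with stipulated numbers (the batch size and effect are illustrative assumptions, not drawn from the text). The familiar Kagan-style point is that if production adjusts in batches of size T and you have a uniform prior over where in the current batch you fall, your purchase has a 1/T chance of triggering an adjustment, so its expected effect equals the average per-unit effect:

```python
# Illustrative sketch of threshold-based expected impact.
# All numbers are stipulated for illustration only.

def expected_impact(batch_size, batch_effect):
    """Expected production change from one purchase, assuming a uniform
    prior over positions within the current batch: a 1/batch_size chance
    of triggering a change of size batch_effect."""
    return batch_effect / batch_size

# Suppose (hypothetically) that chickens are ordered in batches of 25.
# Then one purchase makes one chicken's difference in expectation:
print(expected_impact(batch_size=25, batch_effect=25))  # prints 1.0
```

Budolfson-style buffer information would push the relevant probability below 1/T; the point above is that, amid shifting consumer behavior, the confidence needed to sustain that discount is hard to come by.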
That's all a matter for reasonable dispute. What I found more striking about this section was Nefsky's suggestion that "Lomasky and Brennan (2000, section IV) raise similar worries in the voting case, arguing that when all relevant empirical factors are taken into account, the expected utility calculus will rarely come out in favor of voting." Having read the relevant section, I don't think these are "similar worries" at all. L&B's argument in that section has nothing to do with tiny chances. They are instead arguing that we typically shouldn't be confident about which electoral outcome would actually be better. Their skepticism targets our evaluative beliefs, not our efficacy, and would apply with much the same force even if your vote were certain to be decisive. So I think it's very misleading to cite them (without explanation) in this context.
[This is a common sort of conflation: I often see libertarian friends on Facebook making inefficacy-based arguments against voting -- "Your vote doesn't make a difference" kind of rhetoric -- and when I point out that this is demonstrably mistaken, they retreat to the evaluative-skepticism argument, seemingly without realizing that they have switched to an entirely new argument. My advice to these friends: lead with the evaluative skepticism, and drop the dodgy inefficacy arguments entirely!]
(B) Tiny Differences. Nefsky argues as follows:
Take a variant of Harmless Torturers in which there is only one victim. The expected harm any given torturer would do in this case is just 1/5th of a barely perceptible difference in pain, since there is just one victim and so nothing to aggregate. So as long as each of the thousand torturers gets some clearly perceptible benefit from participating (even just a nice back massage), it would seem that each of them is acting in a totally acceptable way on the expected utility approach.
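The expected-value bookkeeping behind this verdict can be sketched with stipulated numbers (the unit values below are illustrative assumptions, not Nefsky's). If a perceptible change in the victim's pain occurs only once every five switch-turns, each turn carries a 1/5 chance of crossing a threshold, so the expected harm is a fifth of one barely perceptible difference:

```python
# Hypothetical illustration of the expected-value reasoning in the
# one-victim variant. All magnitudes are stand-ins, not from the text.

def expected_harm(threshold_gap, perceptible_harm):
    """Expected harm of one switch-turn when a perceptible change occurs
    only once every `threshold_gap` turns: each turn has a
    1/threshold_gap chance of being the one that crosses a threshold."""
    return perceptible_harm / threshold_gap

# Suppose a barely perceptible difference in pain has disvalue 1.0,
# and a threshold is crossed every 5 turns (hence the "1/5th" above):
harm = expected_harm(threshold_gap=5, perceptible_harm=1.0)  # 0.2

# If each torturer's private benefit (the massage) is worth, say, 0.5,
# participation comes out positive in expectation for each individual:
benefit = 0.5
print(benefit - harm > 0)  # prints True under these stipulated numbers
```

On these (stipulated) numbers, the expected utility approach licenses each torturer's participation, which is exactly the verdict Nefsky wants to press on.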
I think the right answer here depends upon details such as the duration of the tiny pain increase, and perhaps the victim's baseline welfare-level. After all, even a tiny increase in pain could constitute a serious harm if it persists for decades. It could then easily outweigh the transitory pleasure of a brief massage. A very brief tiny pain, by contrast, might be worth enduring for the sake of some larger benefit. But those with prioritarian intuitions may still reverse even this judgment in cases where the brief tiny pain is going to someone much worse off than the recipient of the larger benefit. So it is far from clear that consequentialist evaluation must yield the verdicts in this case that Nefsky assumes.
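The duration point admits of a simple arithmetical illustration (all values below are stipulated for illustration, not taken from the text): even a per-hour disvalue too small to notice can swamp a one-off pleasure once it is integrated over decades.

```python
# Illustrative arithmetic for the duration point; numbers are stipulated.
tiny_intensity = 0.001          # per-hour disvalue of a barely perceptible pain
duration_hours = 30 * 365 * 24  # pain persisting for roughly 30 years
massage_value = 5.0             # one-off value of a brief massage

long_harm = tiny_intensity * duration_hours
print(long_harm > massage_value)  # prints True: lasting tiny pain outweighs it
```

By contrast, set `duration_hours` to a single afternoon and the inequality flips, which is why the verdict plausibly depends on details the case as described leaves open.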
So again, the issue here is actually nothing to do with inefficacy as such. Nefsky is effectively inviting us to imagine a case where the torturers gain more (in aggregate) than their victim suffers. That is, it's a rehash of Scanlon's famous transmitter-room case. That's an interesting and important case, raising dueling interpretations of whether aggregation or priority-weighting is doing the key normative work. But again, it seems a bit misleading to invoke it in this context, since the issue isn't inefficacy (unless Nefsky is somehow imagining that the victim's aggregate pain is greater than the aggregate benefit, despite each individual pain increment being less than each individual benefit, which I submit is a plainly incoherent description of the scenario).
Overall then, I don't think we need to feel too troubled about the prospects for the 'expected value' response to inefficacy objections. Nefsky's skepticism here doesn't appear to be particularly well-grounded.