Many consider Nozick's "utility monster" -- a being more efficient than ordinary people at converting resources into wellbeing, with no upper limit -- a damning counterexample to utilitarianism. It doesn't seem intuitively right, after all, to give all our resources to this one individual and deprive everyone else in the world, even though this would (ex hypothesi) maximize aggregate welfare.
A standard response is to question the coherence of the scenario. It doesn't even seem like a good outcome, after all, which may be taken to cast doubt on whether we are genuinely imagining monster welfare high enough to outweigh all the suffering of everyone else in the world. More directly: I don't think I can positively conceive of arbitrarily high welfare packed into a single life. Further, I think there are principled reasons to think this impossible -- but even if I'm wrong about that, our imaginative resistance is enough to explain away our intuitions about the putative utility monster.
An interesting way to support this response, I think, is to consider a twist on the case where the utility monster begins from a baseline status of arbitrarily massive suffering. If we now imagine that any given resource can either make any other person a little happier or else make a much greater impact on relieving the suffering of the negative utility monster, it seems clear that the latter option is the morally better way to go.
So everyone must allow that it's possible (not necessarily objectionable) for the interests of a single individual to outweigh everyone else's put together, if the stakes for that individual really are sufficiently high. That is, I take it that everyone should agree to the utilitarian verdict about the negative utility monster (at least if we're imagining that all resources are initially unallocated, to bracket any general concerns some have about redistribution). But if so, this would seem to seriously undermine the original utility monster objection. For what we're left with is no principled objection to letting one trump many, but just a standard prioritarian intuition that benefits to the well-off matter less.
This seems much less threatening to utilitarianism, in that it's easier to dismiss prioritarian intuitions as just the erroneous crystallization of some sensible utilitarian heuristics, e.g. regarding the diminishing marginal utility of resources, and the idea that there's generally more you can do to improve the welfare of the worse-off compared to the better-off.
P.S. Anyone know if this argument has been made before?
Negative Utility Monsters
Thursday, August 02, 2018
4 comments:
Curious theory, certainly one that's given me a lot to think about.
If the monster's experience is entirely negative then killing it will bring the greatest good. Giving the monster all the cake in the world might bring its happiness level from negative one billion to a mere negative one hundred; but killing the monster brings its happiness level to zero (presumably after a brief dip). Meanwhile everyone else gets to enjoy their cake.
In answer to your actual philosophical point (assuming I've untangled it correctly; your phrasings are a bit confusing at times): certainly it's possible for one individual to have a greater utilitarian worth than any and all other individuals; and this does indeed make the utilitarian response to the utility monster objection into something along the lines of:
"well of course we'd give all the cake to the monster after we've confirmed it really does enjoy it billions of times more than we ever could. Why would you ever think we wouldn't?"
Like all moral codes, utilitarianism comes in a lot of variations: how important is the 'greatest number' part of the 'greatest good for the greatest number' formula; and so on. In short, I feel like most utilitarians would agree with your point in theory and would put it into practice in situations that are applicable in the real world.
What would happen if they ever actually encountered a utility monster is anyone's guess, though.
I've been writing and deleting possible points I could raise in this comment for ten minutes now; it's been fun.
I think I've said all I want to say now, although I've probably forgotten something.
Thanks for the post.
-4Dragons
Hi, right, for the scenario to work you need to assume the monster is unkillable.
Wow, this is a great point. I'm not sure, but I think it is original.
Thanks! Working on turning it into a brief (Analysis-style) paper now, so we'll see how that turns out...