In the comments at Agoraphilia, I outline a rough specification of the sorts of objective facts that could serve as truthmakers for interpersonal utility comparison (IUC) claims, in hope of making them less mysterious. (Some people hold such comparisons to be impossible in principle. I want to claim that the difficulty is merely epistemic.) In short, my strategy is to convert the IUC into a hypothetical intra-personal utility comparison, by appealing to the global preferences of an idealized agent who gets to experience both lives sequentially. (Like my God, say.)
This seems clearly unproblematic at least for simple hedonistic theories of welfare. Our hypothetical agent can easily compare two experiences across the lifetimes, and determine which is the more pleasurable. But will it still work once we bring in other values? If the two original individuals had very different global preferences, with what "common currency" can we compare them? How could our idealized agent choose between them fairly? (To adopt either preference system would seem to unfairly exclude the other.) We might worry that the two are simply incommensurable.
Indeed, the same problem arises within a single life, if the person endorses different value systems at different times. Perhaps the young idealist most wants to have a positive impact on the world, whereas his older self would prefer to live a comfortable life and look out for his family. Each thoroughly rejects the values of the other. Which lifestyle would be "best" for this person? Here I lean towards the Parfitian response of considering them to be two distinct persons. That allows us to say what is best for each, but it remains unclear how we are to weigh the relative costs and benefits between them, so as to determine what would be best overall.
The problem could be overcome if we assume convergence of idealization. If there is just one maximally coherent and unified desire set, just one value system that an idealized (perfectly rational and fully informed) agent could hold, then the ideal agent could adjudicate the dispute. We could ask him: "Supposing that you will experience this life, first from the young man's perspective and then from the elder's, how do you want it to go?" This yields a determinate answer which can be used to weigh the conflicting interests authoritatively. The idealized young man and the idealized old man would both agree, for we have supposed that idealization would cause their preferences to converge.
But what if such ideal convergence would not, in fact, occur? Then the two idealized selves would continue to disagree about how to weigh the various tradeoffs. In cases of such persisting disagreement, it seems we must conclude that there is no absolute fact of the matter about which harm or benefit is the greater. In those -- perhaps rare -- cases, the welfare facts would be agent-relative (but in a sophisticated way).
That seems an odd result. Perhaps it arises because I am conflating the distinction between one harm "factually outweighing" another (i.e. being a greater harm) versus "morally outweighing" it (i.e. being the more important harm). Perhaps the appeal to idealized preferences really latches on to the latter kind of assessment. But then how are we to get a grip on the former class of facts? Suggestions welcome...
[Thanks to Blar for bringing these problems to my attention.]
Saturday, July 15, 2006
Utility Comparisons and Disagreement
5 comments:
For me, this sort of thought came from the old question where you can wish for anything and want to make sure you wish for the best thing (but have difficulty defining what that is), and so you try to create an idealized you.
Anyway,
1) I think it is harder than it sounds to idealize yourself without some unintended consequences (or without destroying the value of the process) - the process of idealization is like asking a (misleading) djinni for a wish.
It reminds me of an exam I once sat that asked how a car would work if there was no friction - I was inclined to say something between "the car would fall apart" and "the driver would be dead so it wouldn't matter".
2) Preferences are a variable as well as a determining factor (particularly at the god level), so your analysis would be prone to falling apart into hundreds of equally valid answers - potentially as many as there are potential preferences.
The problem, in a sense, is that you will probably end up basically unrecognizable (and yet it is not entirely clear why this would be a problem, since the decisions and their effects are in a sense not sequential).
-----------
In real life situations this is not so hard because you have a limited set of tools at your disposal. These might include "charitable donations" or "calling social welfare".
And one of the important things for achieving your own goals is gaining the support of a reasonable number of other people (if they are affected). So generally all individuals' best interests are served by some sort of negotiation as to what the rules should be, and then by applying those rules - and we don't need to worry about what happens when everything becomes a variable.
Of course what this leaves in the system is power imbalances which artificially weight the comparisons.
The next thing you could do is use whatever system you use to differentiate humans from non-humans to grant preferences (biting a big bullet here), and use that to give you a sort of "amount of money (power)" which you can "spend" however you like. The problem is that this assumes a sort of omniscience again, despite the fact that this is supposed to be the non-god-related version... So another solution is to artificially equalize those imbalances (e.g. via democracy or unions or whatever), on the assumption that we can't efficiently grade humans on their worth (for policy reasons or because of technical difficulty).
Richard, how do you make utility comparisons between the two classes of harms (or benefits) that you propose, hedonic experience (for any sentient being) and preference satisfaction (for persons)? For instance, can you compare a cow's pain with the global preference satisfaction of a man who wants to be a tough, rugged steak-eater? You can't rely on convergence of global preferences under idealization, because in the case of the cow there's nothing there to idealize. If this type of inter-sentient-being utility comparison is impossible, then that is a rather large gap in your theory.
You might try to make the comparison by claiming that hedonic utility (pleasure or pain) has the same value regardless of what sentient being experiences it, so we can just make the comparison intrapersonal by transporting the pain into the person's global preference system. But that doesn't seem right - preferences do alter the value of pleasure and pain. For instance, pain experienced by the rugged man might sometimes be a net positive for him (or at least less negative than it would be for others), because it exemplifies his ruggedness. Maybe you can say that persons' hedonic experiences can be separated into two value components, the purely hedonic component (which represents its utility as an experience of pain or pleasure, and would be the same for any sentient being that had the same hedonic experience) and the preference component (which depends on the relationship between the experience and the person's preferences). But this attempt at a solution just returns us to the original problem, now in the intrapersonal realm. How do we make a utility comparison between these two components when hedonic experiences are considered entirely apart from preferences?
The objective of Utility Theory isn't to give a weight to the different experiences; it's to give them a ranking. That is, the preference function merely asserts that, of two possibilities, one is preferable to the other (or the agent is entirely indifferent).
ReplyDeleteEg. for ice-cream: {chocolate > strawberry > vanilla = coffee > mint}. Note that there's no way of asserting how much more I like strawberry than mint, just that I prefer it, since the preference is the only thing observable.
If we wish to move from ordinal to ratio descriptions, then we can "fudge" it (pardon the pun) by introducing arbitrary bundles of cash into the mix, and ranking those as well:
{$100 > 2 x mint > chocolate > $5.50 > strawberry > $5.45 > vanilla = coffee > mint > $2.25}
Hence, I prefer strawberry ice-cream to $5.45, but not $5.50. You can repeat this with arbitrarily fine granularities of cash, or quantities/multiples of ice-cream, to get a pseudo-scale or weighting.
In a similar vein, you can throw other possibilities/options into the mix, such as a stolen ice-cream, or a vegan ice-cream, and observe preferences under those conditions.
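To make the cash-bracketing concrete, here is a rough Python sketch (mine, purely illustrative): the `prefers` oracle, the dollar bounds, and the toy valuation of strawberry at $5.47 are all assumptions, not anything the preference data gives us for free.

```python
# Sketch: recover a pseudo-cardinal "cash equivalent" for an item by asking
# only ordinal questions of a preference oracle, as in the ice-cream example.

def cash_equivalent(item, prefers, low=0.0, high=100.0, tol=0.05):
    """Binary-search the dollar bracket the agent values `item` at.

    `prefers(a, b)` is an assumed oracle returning True when the agent picks
    `a` over `b`. Only "which do you prefer?" questions are ever asked; the
    weighting falls out of where the item lands among the cash bundles.
    """
    while high - low > tol:
        mid = (low + high) / 2
        if prefers(item, ("cash", mid)):
            low = mid       # the item beats $mid, so its value is above mid
        else:
            high = mid      # $mid beats the item, so its value is below mid
    return low, high        # a bracket, like the $5.45/$5.50 example above


# Toy oracle for demonstration: secretly values strawberry at $5.47.
def _toy_value(option):
    return 5.47 if option == "strawberry" else option[1]

def toy_prefers(a, b):
    return _toy_value(a) > _toy_value(b)

print(cash_equivalent("strawberry", toy_prefers))  # a bracket of width < $0.05 around 5.47
```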
Separating it out into a "hedonic component" and "preference component" just doesn't make sense - preferences are fundamental, in the sense that they already take into account hedonic assessments as well as other value-imparting processes.
From that point of view your "young idealist/old comfort-seeker" dilemma (where - for some unstated reason - one has to make a single, life-long, binding decision about values to live by) boils down to asking which of the two states of affairs is preferable:
(young idealist + old idealist)
vs
(young comfort-seeker + old comfort-seeker).
To answer that rationally, you would also need to look at the time-preferences of the old/young man. I.e. the young man may take the view that next year is a long way off, and that what happens in five years doesn't count. By contrast, the old man may take an inter-generational view and worry about what will happen in thirty years with his grand-kids.
A further factor is appetite for risk (risk-seeking vs risk-aversion). This might be captured through the notion of the marginal utility of money.
We shouldn't assume that any idealised super-agent making this assessment will have a risk-neutral view or an eternal time-horizon. Nor should we assume that it will be a simple weighting of the two (i.e. if you're a "young man" for ten years, and an "old man" for sixty, then the super-agent's time/risk preferences will be the 1:6 weighted-average).
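To illustrate just the time-preference half of this, here is a toy Python sketch of my own; the payoff streams and discount rates are made-up assumptions, not anything from the post.

```python
# Toy sketch: the same two life-plans can swap rank purely because of the
# evaluator's assumed rate of time-discounting.

def discounted_total(yearly_payoffs, rate):
    """Sum of yearly payoffs, each discounted by (1 + rate) ** year."""
    return sum(p / (1 + rate) ** t for t, p in enumerate(yearly_payoffs))

# Hypothetical payoff streams over 70 years: the idealist's life pays off
# late (impact accrues slowly); the comfort-seeker's pays off early and flat.
idealist = [1] * 10 + [8] * 60
comfort_seeker = [5] * 70

for label, rate in [("steep 'young man' discounting", 0.15),
                    ("patient 'inter-generational' discounting", 0.01)]:
    i = discounted_total(idealist, rate)
    c = discounted_total(comfort_seeker, rate)
    winner = "idealist" if i > c else "comfort-seeker"
    print(f"{label}: idealist={i:.1f}, comfort-seeker={c:.1f} -> {winner}")
```

Before the super-agent can adjudicate, we would have to decide whose discounting and risk attitude it inherits, which is part of the very question at issue.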
I think these issues can be dealt with by plausible/persuasive heuristics, but I'm not sure that a compelling argument about how to resolve them can be made.
Hi Richard,
Your "deja-vu pantheist" thought-experiment is interesting. (Shades of Railton, here.) However, it seems to fall into a Parfitian kind of trap. The difference is that, while Parfit seems eager to describe the same person as different people, you seem to be willing to describe different people as the same person (at least for the purposes of the thought-experiment and of making IUCs).
I would warn against the idea that the same person who has two different sets of preferences is therefore two different people. It is feasible to say that they have two different personalities or personas, but that's not quite the same thing. Going the Parfit route seems to underdescribe what it means to be a person, a self, and neglects the chain of experience that connects one to another. Or perhaps I misunderstood Parfit's point?
In any case, I'm not convinced that we need to abandon hedonistic utilitarianism for these preference-based solutions. Preferences certainly play a crucial role in the structure of a persona, but from a genuinely utilitarian perspective, they're ultimately subject to a hedonic analysis just like anything else. My stubbornness, of course, doesn't help us much with IUCs, unless we are humble in our attempts to predict what makes strangers happy and what doesn't, which is all we really can do.
One idea that comes to mind is that comparison of global preferences could be handled by using our more hedonic forms of utility as a crude form of "currency."
For instance, let's suppose that I have a global preference that my first-order preference to play Candy Crush Saga be decreased and my first-order preference for writing novels be increased. I am willing to suffer no more than X units of pain in order to implement this reordering, where one unit of pain equals -1 utilons. Therefore, satisfying that global preference is worth X+1 utilons.
Now, obviously we would probably need a far more complicated system of exchange than that in order to arrive at a system of comparison that is not too far out of step with our considered moral intuitions. But it's a start.
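For what it's worth, the crude exchange could be written down like this (a sketch only; the names, the pain thresholds, and the one-utilon-per-pain-unit rate are stipulations):

```python
# Crude sketch of "hedonic utility as currency": value each person's global
# preference by the maximum pain they would accept to have it satisfied,
# assuming (contentiously) that a unit of pain is worth the same for everyone.

UTILONS_PER_PAIN_UNIT = 1.0

def preference_worth(max_pain_units):
    """Utilon value of satisfying a global preference, given the most pain
    its holder would willingly suffer to bring it about."""
    return max_pain_units * UTILONS_PER_PAIN_UNIT

# Hypothetical agents: Alice would endure up to 12 pain-units to reorder her
# first-order desires toward novel-writing; Bob would endure up to 7 for his.
alice = preference_worth(12)
bob = preference_worth(7)
print("Prioritize Alice" if alice > bob else "Prioritize Bob")  # -> Alice
```

(Note that this bakes in exactly the assumption Blar questioned above: that a unit of pain has the same value regardless of who experiences it.)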
Another possible route for comparison is direct empathy. Suppose someone has a global preference to play videogames less and spend more time on writing. I'm sure most people have found themselves in fairly similar situations in the past, where their first-order preferences were out of line with their second-order, global preferences. And I'm sure that in those situations you would be able to roughly determine how much utility you would give up to see your global preference satisfied. From there you can just use empathy to generate a normalizing assumption.