We have many prima facie goals. But what they all have in common is that we will be pleased to achieve them. The things we want are the things that will make us happy. So it seems that happiness is really our ultimate goal, and our more particular ends are merely instrumental to, or implementations of, this more general end.
The argument can be clarified further if restated in terms of desire: we want our desires to be satisfied, therefore what really matters to us is to achieve desire-satisfactions, and the particular things we now want are merely instrumental to the ultimate end of satisfying our desires in general. N.B. This implies that I could make you better off by inducing in you a strong desire to count blades of grass (and granting you access to a lawn).
The fallacy here derives from a kind of scopal ambiguity. It's true that we want our desires to be satisfied. That's tautological: we want to get what we want. But that latter 'what we want' should be understood de re rather than de dicto. We want to get those particular objects that we want. We do not merely want to have any old satisfied wants (e.g. induced desires that don't relate to our existing goals or values at all). Put formally:
(*) [Those X: WANT(i,X)] WANT(i,X)
"Of those things that I [now] want, I want them."
not
(#) WANT(i,[Those X: WANT(i,X)] X)
"I want that I get whatever things I [then] want."
Once we observe this distinction, hedonism and (#)-type desire satisfactionism lose much of their appeal. Why think that what matters most is happiness or desire-satisfactions in general? It's not what we actually care about, after all. (I'd rather struggle to achieve some of my philosophical and personal goals than be a satisfied grass-blade counter.) Why should our counterfactual concerns outweigh our actual ones?
(I could understand it if they were objectively more meritorious, perhaps, but the idea here is that their mere strength suffices to make them more important than our actual concerns. Satisfying particular preferences takes a back seat to promoting preference-satisfaction in general, including by means of inducing new preferences. Compare G. A. Cohen's objection to utilitarianism, which is a more radical version of my complaint here, since he applies it even to objective values, and not just subjective ones.)
(I'd rather struggle to achieve some of my philosophical and personal goals than be a satisfied grass-blade counter.)
But you say this now – if your goals were different, wouldn't you be satisfied to have achieved _them_?
Certainly, if you manipulate me to be satisfied with different preferences, then (by definition) that is a scenario in which my counterfactual self would be satisfied. But it doesn't follow that, from my current perspective, I should see anything desirable about that scenario. So if the DS theorist wants to insist that it would "really" be better for me, that commits them to the same sort of 'paternalistic' view of welfare -- i.e. overriding my own best judgment of what's good for me -- that desire theorists reject in objectivism.
For our children we say we want them to be happy, to get whatever it is that they want. For myself at retirement, I don't know what it is that I will want, but I want myself to get it. So there are senses in which we most of all want people to get whatever they want, and are less picky about what that thing is.
Yeah, within constraints. (Conditional on your son or future self wanting to be a serial killer, you presumably no longer want him to get what he wants.) And do you really know any parents who would accept a mad scientist's offer to turn their child into a super-satisfied grass-blade-counter?
There is some sense in which we want people (others, as well as ourselves) to get what they want. But it is definitely not in the sense of wanting to maximize the number of desire-satisfactions that occur. (Nor even 'happiness', in any narrow sense: who would plug their kid into an experience machine?) We do not look kindly on the prospect of artificially induced desires. Rather, what we want is for the people we care about to do well as assessed against their current goals (suitably idealized, perhaps), and we trust that their future desires will typically be a coherent continuation of these.
Artificially induced desires undermine this assumption; they won't necessarily "make sense" or fit into the broader web of the person's interests and concerns, which may explain why we don't value them in the same way.
I know we've argued a lot over the years, so let me just state that this is one of the crispest statements I've seen of an important point, and say thanks.
This is one of the crisper statements I've seen of a very important point - I wish I'd had it back when I was trying to talk someone out of desirism.