An agent may ask, “What motivational profile would it be best or most desirable, from a moral perspective, for me to have?” I take this to be the question implicitly driving the motive utilitarian (or the global consequentialist thinking about motives). It asks about the morally recommended or fortunate motivational profile. By contrast, the question of moral worth is importantly different—something more along the lines of, “What motivational profile is most morally fitting or apt, reflecting an orientation toward the good, and is on this basis perhaps worthy of praise or high esteem?”
Note that there is no reason to expect the same answer to both questions, as an internal orientation towards the good may have bad extrinsic effects. For example, we may imagine that an evil demon threatens to destroy the world unless you acquire (and subsequently maintain) the very same vicious, morally contemptible motivations that drive the demon himself. He offers a magic pill that will induce this effect in you. As a good person, you care more about the world than about the purity of your own moral character, and so—quite virtuously—opt to take the pill and become vicious. Your subsequent motivational profile is, by design, morally contemptible. (We may suppose that you come to intrinsically desire to corrupt the virtuous, cause innocents to suffer, etc.) Nonetheless, it is highly morally fortunate or desirable—though you no longer care that this is so—because your newfound viciousness is causally responsible for saving the world from the evil demon’s threat. So the moral aptness of one’s motivations cannot be identified with their desirability or usefulness from a moral point of view.
Sharadin later suggests that it is a "cost" of the right-making account of moral worth that "we might, even if only in rare circumstances, be obligated to act unworthily when there’s an alternative account of worth available [...] that does not entail this." But why is this a cost at all? On the contrary, it is an incontrovertible datum, as demonstrated by my case above, that we might (in a bizarre hypothetical case) be morally obligated to make ourselves vicious, or such that our future actions will lack moral worth. It is, to be sure, a morally unfortunate circumstance that we are imagining here; but all sorts of morally unfortunate circumstances are conceptually possible, and it'd be a failure of our conceptual scheme were it incapable of accurately describing these scenarios.
Moreover, insofar as we're interested in the commonsense notion of "acting for the right reasons", it should be clear that this corresponds to having fitting rather than merely fortunate motivations. After all, external incentives might make any motivations whatsoever fortunate (or consequentialist-recommended) in special circumstances. But commonsense does not allow any motivations whatsoever to be virtuous, praiseworthy, or what have you. (Again, just take the example of a desire to cause innocent people gratuitous suffering.)
So it's a non-starter to try to analyse "right reasons" in terms of consequentialist-recommended motivations. Even consequentialists should acknowledge this: sometimes, in weird circumstances, we should try to acquire vicious motivations. The mere fact that we ought to acquire them doesn't make them virtuous. (But that's okay, because there are more important things than virtue!)
You may wonder: what, then, is the right account of moral worth for consequentialists? My heterodox answer -- available here -- is that consequentialism turns out to be perfectly compatible with a right-makers account of right reasons, once we take care to properly identify the theory's right-makers.