First, Hanson proposes that Morality Should Exist -- which sounds like a category error, but what he really means is that "there should exist creatures who know what is moral, and who act on that." But this is much less intuitive as a substantive principle than the previous intuition that happy people should exist. And as a formal constraint, the proposal that any (morally eligible) utility function must assign utility to agents using this very utility function seems baseless. After all, it seems perfectly coherent to care only about the welfare of sentient beings, and not at all about whether their welfare is achieved by means of explicit attempts to promote welfare. If Hanson is looking for an uncontroversial principle to rest his case on, gratuitously ruling out traditional consequentialism like this seems a bad place to start.
Next, Hanson proposes that Morality Should Be Adaptive:
[M]orality evolved to help us survive [... So if] we apply that morality in such a way as to make ourselves go extinct, that seems a rather dysfunctional broken application of such morality!
But this conflates moral belief with truth, as well as evolutionary with normative goals. The fundamental moral facts, if there are any, did not evolve: like other abstract truths (e.g. mathematics), they just are. Perhaps our moral beliefs/dispositions were shaped in part by evolutionary selective pressures. But even if the evolutionary "purpose" of our moral beliefs (like everything else) is to help us survive and propagate our genes, that doesn't make it a "purpose" we must share. Normatively speaking, belief aims at truth, so the purpose of our moral beliefs is to accurately represent whatever the moral truths are. And whether it's good for us to survive is a substantive normative question -- albeit one that's plausibly settled by whether our lives tend to be good for us on net.
Hanson concludes with a challenge:
The evolutionary context of our moral intuitions gives a rich detailed framework for defining and estimating moral error. If you reject that framework, the question is what other framework will you substitute? How do you otherwise define and estimate the error in your specific moral intuitions?
As always in philosophy, the only way to proceed is by means of reflective equilibrium: starting with what we judge likely to be true, and seeing how these judgments cohere with other (specific and general) claims that strike us as plausibly true, resolving any conflicts in whatever way strikes us as most plausible.
Hanson's proposal is but a particular instance of this, where you start with overwhelming confidence that moral goals should coincide with evolutionary goals. But I have no such confidence in that assumption. I find it more plausible to start with such substantive claims as that happy, flourishing lives are good, and misery is bad. Hanson's hope for a purely formal moral framework amounts to a wild goose chase, ending in the smuggling of (less plausible) substantive moral assumptions through the back door.
In seeking an error framework, I had in mind something that could predict the types of errors one might see, and the rate at which they would occur. That would give you a basis for a systematic analysis of which specific intuitions are in error. Your suggestion to resolve conflicts in whatever way strikes you as plausible offers no such basis, and can't even notice errors that don't lead to direct conflicts between intuitions.
If one of your general judgments is that there should be some relatively simple unifying principle subsuming the more particular judgments, then your best effort to develop such a systematic moral theory may serve as a new basis for predicting errors in your previous intuitions. My point is just that you can't make such predictions ex nihilo -- they will be based in some prior assumptions. I was drawing attention to yours, and asking whether they are really the moral principles about which you are, on reflection, most confident.
It seems to me that we need a theory of the causal processes that produce moral errors in order to systematically estimate moral truth from our sets of error-prone moral intuitions. Evolutionary theory gives a coherent causal framework to predict the distribution of errors.
Evolutionary theory might help to predict what people's moral beliefs are (though less reliably than simple observation, I would expect!). How exactly do you get to conclusions about which particular such beliefs are "errors", without a prior theory of what the moral truths are? (Cf. my response to Sharon Street.)
Yes of course, but that theory of moral truth won't get you very far without some other theories about the causal processes that produce errors.
Richard,
Wonder if you would find Harrison's prima facie duty-based anti-natalism more deserving of being taken seriously.
Not really. His core argument: "[W]hen it comes to creating a new life, even a very happy one, there is a prima facie duty to prevent the suffering it contains but no prima facie duty to create the pleasures. And so the prima facie duty to prevent the suffering is unopposed and thus decisive."
I think this just goes to show that he has misunderstood the prima facie duty to prevent suffering. It is better understood as a prima facie duty to prevent uncompensated suffering. This avoids the ridiculous implication that we wrong people by bringing them into a very happy existence.