Nefsky wants a case where the morally significant features of a situation are vague, or have fuzzy boundaries. It could then be that no individual increment makes a difference as to what "side of the boundary" the outcome falls under, and yet that many such increments collectively do make a morally significant difference.
But it seems overwhelmingly plausible to me, as a pre-theoretic datum, that whatever is of fundamental moral significance cannot admit of vagueness. After all, (i) there is plausibly no "ontic vagueness": the world itself is precise/determinate in all fundamental respects; vagueness merely enters (as a semantic matter) into our high-level descriptions -- whether we call a thus-sized collection of grains a "heap", or whether we call a person with a certain number of hairs on their head "bald", etc. It's not as though there's some objective property of baldness out there in the world that we're trying to latch on to. (ii) The things that matter are features of the world, not of our vague descriptions. So (iii) the things that matter don't admit of vagueness.
One thing that struck me, both in reading Nefsky's paper, and in some related discussion at the recent RoME conference, is how readily people attribute "vagueness" to a case which could just (or even more) plausibly be accounted for in terms of graded harms. Consider, for example, Nefsky's discussion of overfishing (p.377):
if every angler in the village takes one more fish than she is allotted, this will result in the fish population losing its ability to replenish itself; but, for each angler, taking a single fish more is not itself enough to make a difference. Kagan suggests that this is a triggering case. But... this might be a nontriggering case. What would it take for it to be a nontriggering case? Instead of the removal of some particular fish triggering the problem, it could be that there is no precise triggering point—no sharp boundary between the population having a healthy ability to replenish and not having it.
It may be that our categorization of populations as "having a healthy ability to replenish" is vague. But it's not our categorizations that matter here. It's the actual outcomes for the fish population. And insofar as our categorization is vague, this is presumably just because the underlying scale of population-health is graded, rather than fundamentally vague. When we reach the "fuzzy boundaries" of our categorization, each fish (or few fish, if we have some small-scale triggering going on) removed makes things a bit worse for the population's ability to replenish itself. There's no vagueness in the actual propensity of the fish population to replenish itself (what would that even mean -- are some fish going to have an indeterminate number of offspring?!) -- all that's vague is the point along the scale at which things have gotten so bad that we no longer call it "healthy".
Later on Nefsky points out that there can be "phenomenal sorites series" for terms like "looks red" or "sounds loud". But again, these are just high-level descriptions that admit of fuzzy boundaries, not cases where the phenomenology itself is fundamentally vague. When we start thinking about phenomenal properties that actually matter: e.g. how painful it feels to be in a certain state, this again seems to merely be a graded (rather than vague) scale, at the fundamental (and fundamentally significant) level.
I was also puzzled by Nefsky's discussion of (what we might call) "external" triggers (p.391):
Imagine that you are working with this machine that registers charges only in whole kilovolts, increasing the current applied to it nanovolt by nanovolt. Eventually the current will be within the margin of error of a kilovolt. So, the machine could change from registering 0 kV to registering 1 kV at any moment. But, given that you are within the margin of error of 1 kV for that device, it would be a mistake to think that, at the moment when it actually does register 1 kV, this is due to the last minuscule increase in voltage that you made. It is due to the fact that many increases were made, such that the current is in some rough, very close vicinity of 1 kV. If the current had not been within the margin of error, the machine would not have registered it as 1 kV. But that it registered 1 kV at the precise point in your adding nanovolts that it did is most likely due to mechanical or environmental factors and not to the addition of some single nanovolt. This means, I think, that we cannot say that had you not added that last nanovolt, the machine would not have registered 1 kV.
Unless I'm missing something, this seems to confuse temporal and counterfactual criteria for triggering. I agree that cases like this show that temporal criteria are no good: the latest-in-time increment may not have been essential, if a previous setting (from time t-1, say) together with "external" fluctuations at time t jointly suffice to bring about the change in outcome. But holding fixed the precise details of the fluctuations that occur at t, there must be some minimum voltage level n such that the change will occur if the voltage level is at n nV, but would not have occurred with merely n-1 nV. So while it's true that n need not be "that last nanovolt" added, it's still the case that some (perhaps earlier) individual increment n was in fact counterfactually responsible (given the later environmental fluctuations) for the triggering.
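The point about holding the fluctuations fixed can be made concrete with a toy model. (This is only an illustrative sketch: the numbers, the simple "applied voltage plus fluctuation reaches 1 kV" registration rule, and the function names are all invented for the example.)

```python
# Toy model of the kilovolt-registering machine: holding the environmental
# fluctuation at time t fixed, there is a sharp minimum number of nanovolt
# increments n such that the machine registers 1 kV with n increments but
# not with n-1. All quantities are invented for illustration.

KV_IN_NV = 1_000_000_000_000  # 1 kV = 10^12 nV

def registers_one_kv(increments_nv, fluctuation_nv):
    """On this toy model, the machine registers 1 kV whenever the applied
    voltage plus the environmental fluctuation reaches 1 kV."""
    return increments_nv + fluctuation_nv >= KV_IN_NV

fluctuation = 250  # fixed environmental contribution at time t, in nV

# The least n with registers_one_kv(n, fluctuation) is exactly
# KV_IN_NV - fluctuation: a sharp counterfactual threshold, even though
# that threshold need not coincide with the last increment added in time.
n = KV_IN_NV - fluctuation
assert registers_one_kv(n, fluctuation)
assert not registers_one_kv(n - 1, fluctuation)
```

The sketch just makes vivid that once the "external" factors are held fixed, some particular increment is counterfactually decisive, even if it is not the temporally last one.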
And the same will then be true in the "Harmless Torturers" case that Nefsky goes on to discuss (p.393). Even if it's true that some "other factor would trigger this change for the worse were that [last] increase in voltage not to occur", that just goes to show that it was some previous increment in voltage that was counterfactually responsible for triggering the increase in pain in this case.
So I remain unconvinced that (traditional, "individualistic") consequentialism has any problem accommodating these sorts of cases. What do you think?
I tend to agree with your view on graded harms, and don't see sorites-type situations as a challenge to consequentialism.
I'm not sure if the following is a different way to think about these situations, or is a rephrasing of the graded harms account, or if it makes any sense, but bear with me.
In the overfishing scenario, we could say that each additional fish killed lowers the chances that the population will sustain itself. So perhaps there are obligations not only to avoid actions with bad consequences, but also to avoid actions that increase the odds of bad consequences?
A minor nitpick: volts (V) and kilovolts (kV) are units of potential, not current. The latter is measured in amperes (A).
Hi Pietro, right, that would be a "triggering" version of the case (so, different from Nefsky's intended interpretation here). I agree that plain old expected utility ("avoid actions that increase the odds of bad consequences") can accommodate that nicely.
What about Nefsky's fairness example? Do you think that it is unfair if the millions of gallons of water to which A and B both have equal claim are divided so that A and B get approximately equal amounts but A ends up with one extra drop (or molecule) of water? I don't see why we should just assume that "whatever is of fundamental moral significance cannot admit of vagueness". Perhaps fairness (or rough equality) is of fundamental moral significance even though it is not a precise matter. And why can't rough equality be a feature of the world? Is it not a feature of the world that A and B have roughly equal portions of water?
Hi Doug, I confess I'm generally pretty skeptical of "fairness" as having fundamental normative significance! But if I try to play along, then yes, that definitely seems a more compelling sort of case.
However, it still seems very open to a graded interpretation: maybe perfect equality is most fair, and every drop of deviation from that makes things very slightly less than perfectly fair. Must we think that there is a binary distinction to be drawn further down, between "fair enough" (or rough equality) and "not fair enough" (i.e. worth getting upset about)? I'm not sure -- maybe the believer in fairness should feel very slightly more resentful about each drop less than the average that they receive. Though even if we are forced to accept a binary distinction here, I still don't think we're forced to accept any kind of metaphysical vagueness: a kind of "epistemicism" (i.e., there is a sharp line, we just can't know exactly where) seems plausible about substantive moral properties in a way that it doesn't for descriptive predicates like 'bald'.
"Is it not a feature of the world that A and B have roughly equal portions of water?"
Well, it's not a fundamental feature of the world, I take it. So the question is whether these higher-level descriptive facts can be (irreducibly) significant. It seems a little odd to me, to think that they would be -- too subjective, in a sense, since any (apparently) vague predicate like "roughly equal" seems more a contingent human projection than a genuine natural kind that carves the world at its joints. I feel like the stuff that really matters should be more objective than that.
Of course, I don't pretend to have given any sort of knock-down argument here; just a reasonably plausible (I hope) "internal defense" that's available to traditional consequentialists.
It seems to me that perfect equality (having exactly the same number of molecules of H2O) doesn't matter in the least and that it is only rough equality that matters.
Regarding the feature of the world stuff, are you assuming that the natural world is all there is? I'm not. I'm imagining that there can be irreducibly non-natural ethical properties and that perhaps these properties could be vague. In this case, there would be a kind of metaphysical vagueness without there being any vagueness in the fundamental features of the natural world.
(I'm on board with metaethical non-naturalism. I was just thinking about the subvening non-normative "base" upon which the normative facts supervene. E.g. if "fairness" is metaphysically vague, that's presumably because its non-normative ground is the vague "rough equality" rather than any genuinely fundamental non-normative property, like the precise distribution of particles. The latter would more naturally lead to a merely epistemic account of apparent moral vagueness.)
But in order to accept that non-natural ethical properties supervene on the natural properties, we don't have to hold that for every difference in the natural properties, there must be some difference in the ethical properties. So why think that if "fairness" is metaphysically vague, that would have to be because there is some vagueness in the natural properties on which it supervenes? Perhaps these problems arise precisely because the ethical properties are vague but the underlying base natural properties on which they supervene are not. This is why you can have more C-fibers firing even though you do not suffer more.
I need to get back to other work. So I'll take your responses "off the air." Thanks, though, for an interesting discussion. I need to think more about this stuff.
Another thought:
You say, "vagueness merely enters (as a semantic matter) into our high-level descriptions -- whether we call a thus-sized collection of grains a "heap", or whether we call a person with a certain number of hairs on their head "bald", etc. It's not as though there's some objective property of baldness out there in the world that we're trying to latch on to. (ii) The things that matter are features of the world, not of our vague descriptions. So (iii) the things that matter don't admit of vagueness."
What matters, I think, is not how many hairs one has on one's head but whether one is noticeably more balding. And there may be no precise boundary between being noticeably more balding and being unnoticeably more balding. And we might say the same thing about pain. What matters is whether we are noticeably in more pain.
Hi Doug, by "noticeable" do you mean "discriminable in a pair-wise comparison", or just "makes a phenomenological difference to how much it hurts (say) to be in that state"? It seems to me that the latter is clearly what matters, but that only the former admits of vagueness. (Cf. my old post on the puzzle of the self-torturer.)
I meant the former. And it's not clear to me, as it is to you, that it's the latter that matters. I agree that there has to be a phenomenological difference in at least some of the pairs, but I don't think that we must be able to discern between these phenomenological differences. Suppose that we each look at two distinct "spot the differences" pictures. We will, I assume, have had different phenomenological experiences. But if we can't tell the difference between the two pictures or the way that they make us feel when we look at them, then it's not clear to me that this phenomenological difference matters.
Really? Suppose I'm just really bad at introspection, working memory, or whatever other cognitive processes underlie pair-wise comparison, so even a pretty significant increase in pain will not alter my reports. Still, by hypothesis, this is not merely a difference in sub-conscious neural processing. I'm really feeling more pain in the second case than in the first. Given that we're not behaviourists, why would discriminatory abilities, rather than the actual phenomenal feel, be what matters?
Good point. My thought, which may be misguided, is that there could be phenomenological differences but that the particular phenomenological differences may not matter. For instance, it may be that I feel more pain but I suffer no more as a result, because I don't mind more pain in this instance. I gather, though, you will insist that there must be phenomenological differences in how much one suffers in a self-torturer case. But isn't this because you assume that whatever phenomenological difference is the one that matters must be one with a non-vague boundary between more and less of it?
And in the case of noticeably more balding, what matters is whether you or others perceive you to be less handsome as a result of your having fewer hairs on your head. No one cares about what exact number of hairs other people see. They care about whether they are perceived as more or less handsome. But I suppose that you'll insist that whatever it is that matters (in this case, people's perception of you) can't be vague. Or you'll insist that 'more handsome' is "more a contingent human projection than a genuine natural kind that carves the world at its joints" and thus can't be what matters. But why should we just assume that what morally matters has to be something that carves the natural world at its joints? Perhaps the projections are sometimes indeed what matters?
Oh, I agree that others' judgments of us can often matter, but they're surely graded (with occasional triggering, if not every increment is discernible) rather than vague.
(So I'm thinking that others' attitudes, etc., are perfectly "objective" / real constituents of the world, in a way that the putative properties of "rough equality", "baldness", etc., are not.)