Here are a few more thoughts on how best to interpret 'equal concern'. I think this might best be understood as a matter of procedural fairness. We establish a fair process, and accept whatever results it yields. ('Substantive' fairness, by contrast, requires a particular result.)
So what would a fair procedure be? It needs to take everybody's interests into account, and treat them equally, i.e. without favouritism. The Rawlsian 'Veil of Ignorance' springs to mind. Imagine that everyone affected is put behind the VoI, where they no longer know any of their own personal details (e.g. wealth, age, sex, education or talents). They then get to deliberate together and choose the basic structure of society -- a decision that aims at self-interest, but will be untainted by personal bias, since for all they know, they might end up as anybody in that society.
Now, suppose they had two choices:
1) Everyone has mediocre well-being
2) A small proportion of people are moderately poorly off, but everyone else flourishes.
Surely the rational thing to do is choose #2? But if this is so, and if the VoI -- as described -- really is a fair procedure, then this establishes that utilitarianism, not egalitarianism, is the theory which treats all people with equal concern.
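To make the comparison concrete, here is a minimal sketch of the calculation a self-interested deliberator behind the VoI might run. The well-being scale, the levels, and the group sizes are all made-up numbers for illustration, not anything drawn from Rawls or from the options as stated:

```python
# Illustrative only: well-being levels (0-10 scale) and group sizes are
# made-up assumptions, chosen just to show the shape of the calculation.
population = 1000

# Option 1: everyone has mediocre well-being.
option_1 = [5] * population

# Option 2: a small proportion (say 5%) are moderately poorly off,
# while everyone else flourishes.
worst_off = int(0.05 * population)
option_2 = [3] * worst_off + [9] * (population - worst_off)

def expected_wellbeing(distribution):
    # Behind the VoI you are equally likely to end up as any member of
    # the population, so expected well-being is simply the average.
    return sum(distribution) / len(distribution)

print(expected_wellbeing(option_1))  # 5.0
print(expected_wellbeing(option_2))  # 8.7
```

On these (admittedly stipulated) numbers, #2 comes out well ahead in expectation; one would have to weight the worst case far beyond its probability to prefer #1.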
Note that Rawls only manages to get his 'maximin' principle out of the VoI procedure by insisting that the relative sizes of various social groups are hidden behind the veil. In other words, the choices would be:
1a) Everyone has mediocre well-being
2a) There is a group of moderately poorly off people, and a group of flourishing people.
Here it may indeed be rational to choose #1a, simply because one cannot accurately assess the risks involved in #2a. But what justification is there for hiding this information? I think the process is fairer when the relative probabilities are made known. If we do not know how many people are in each group, how can we weigh their interests fairly and appropriately?
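To see how much turns on hiding that information, here is a rough sketch (again with made-up well-being levels) contrasting the maximin rule, which is about all one can apply when the group sizes are unknown, with the ordinary expectation calculation that becomes available once they are known:

```python
# Illustrative only: the well-being levels are made up; the point is the
# contrast between the two decision rules, not the particular numbers.
option_1a = {"everyone": 5}
option_2a = {"moderately poorly off": 3, "flourishing": 9}

def maximin(option):
    # With group sizes hidden, a maximally risk-averse deliberator can
    # only compare the worst case each option allows.
    return min(option.values())

def expected(option, proportions):
    # With group sizes known, each position can be weighted by the
    # probability of actually ending up in it.
    return sum(level * proportions[group] for group, level in option.items())

# Group sizes hidden: maximin favours #1a (worst case 5 beats worst case 3).
print(maximin(option_1a), maximin(option_2a))  # 5 3

# Group sizes known (say 5% poorly off): expectation favours #2a.
print(expected(option_1a, {"everyone": 1.0}))  # 5.0
print(expected(option_2a, {"moderately poorly off": 0.05,
                           "flourishing": 0.95}))  # 8.7
```

Whether #1a or #2a looks rational thus depends entirely on which decision rule the veil forces on us, which is just the point at issue.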
If done properly, it seems to me that the VoI becomes equivalent to the 'ideal observer' theory. That is, we imagine an omniscient, benevolent, impartial spectator, and ask them which option would be best. It seems likely that such a procedure would favour utilitarian over egalitarian outcomes. So that's what equal concern requires.
I think the idea of the Rawlsian veil of ignorance is that you are meant to imagine that you yourself would be the person who ends up in the worst-off position, whatever that may be, rather than being a disengaged spectator to the process.
No, I'm pretty sure we're not supposed to know which of all the possible people we are going to end up as. (That's certainly how it's always been taught in our political philosophy lectures. And I've skimmed a little of Rawls' Theory of Justice, and that does seem to be what it was saying.)
You're right that we're not "disengaged". But we need to imagine ourselves as possibly being each person (since, after all, we are "ignorant" of which person we will actually turn out to be), and not only the worst off. And it seems to me that this would justify utilitarian rather than maximin results.
Part of the reason this information is kept behind the veil of ignorance is that it seems like morally irrelevant information that could be used to tailor our distributional schemes. Knowing it comes very close to having real information about our position in society, which is verboten.
That is only part of the answer. The other part is that we must be very risk averse in the original position.
There are two important reasons why this is true.
First, these deliberations will eventually dictate our basic life prospects. I think there is something very intuitive in the thought that, where one's basic life prospects are concerned, getting guaranteed a medium, basic threshold is superior to achieving marginal additional income at the risk of having a terrible life.
Second, we are not supposed to know what our risk-taking propensities will be. That, too, is hidden behind the veil of ignorance. Given that, the first consideration is decisive.
The difference between one person suffering and a billion people suffering sounds pretty morally relevant to me! And, as I say in the main post, "If we do not know how many people are in each group, how can we weigh their interests fairly and appropriately?"
ReplyDelete"getting guaranteed a medium, basic threshold is superior to achieving marginal additional income at the risk of having a terrible life."
If that is so, then informed deliberators would take it into account when making their decision. There is no need to hide the group-size info unless you want to fix the outcome in advance, by forcing people to be risk-averse against all rationality.
(Do you seriously believe that option #1 described in the main post is rationally preferable to option #2? Would you choose it? For an even more blatant counterexample, see "the Worst-Off-Virgin Sacrifice Principle" from Scottish Nous.)
"we are not supposed to know what our risk-taking propensities will be."
So the answer is to force us not to take any risks whatsoever? That's rather extreme. Not to mention self-defeating!