[I was just looking up this old exam answer I wrote for a class last year on 'subjective oughts', and I realized that I hadn't yet posted it to the blog. So, here goes -- lightly edited, but still lacking footnotes, etc.]
What is the distinction between what a person “subjectively ought” to do and what a person “objectively ought” to do? What is the best analysis of each notion? What objections are there, if any, to these best analyses? (You might discuss more than one attempt to analyze them.) Is the notion of the “subjective ‘ought’” a useful notion? If so, how? If not, why has it been thought to be useful, and what is wrong with that thought?
Suppose that a trustworthy demon threatens to kill ten innocent prisoners unless our protagonist, Sally, wins their freedom. He presents her with three buttons, and explains that the rules are as follows: Exactly one of the first two buttons (A or B) is fixed to set all ten prisoners free, while the other won't free any. Alternatively, she can play it safe by pressing button C, which is guaranteed to save nine of the ten prisoners. What should she do? [This is a variation on Parfit's “mineshaft” case, or Jackson's case of the three medical treatments.] Suppose that, though Sally does not know this, button A is in fact the one that will save all ten. It then seems that we can identify an 'objective' sense in which she 'ought' to press button A. That would be the best decision, in light of the actual facts of the situation. On the other hand, there's a straightforward (if more 'subjective') sense in which she 'ought' to play it safe and press button C. That'd be the most wise or rational decision, given the information available to Sally at the time of her decision. We can thus intuitively grasp a distinction between 'objective' and 'subjective' oughts, though it remains to be seen how we might precisely explicate or analyze these notions.
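To make the 'subjective' verdict concrete, here is the expected-value arithmetic behind it, on the natural (though unstated) assumption that Sally divides her credence evenly between A and B being the life-saving button, and that she counts each life saved equally:

\[ EV(A) = EV(B) = \tfrac{1}{2}\times 10 + \tfrac{1}{2}\times 0 = 5 \text{ expected lives saved}, \qquad EV(C) = 9. \]

Pressing C thus maximizes expected lives saved, even though it is the one option Sally knows is not objectively best.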
As a first pass, we might try to analyze what S objectively ought to do as what S would do if she were fully informed and morally perfect. But this will run into familiar difficulties (cf. Shope's “conditional fallacy”), since it might be that S ought to gather more information, or try to become more virtuous, whereas her idealized self S+ would, in virtue of her ideal condition, have no reason to do these things. We might get around this problem by asking instead what the idealized S+ would want or advise S to do, given her actual (non-ideal) condition. I find this a helpful enough explication, but as an analysis it seems to get things backwards: presumably, S+ would advise S to ϕ precisely because ϕ-ing is what S objectively ought (or has most objective reason) to do. S's reasons for action ground the reasons for giving S this advice, rather than vice versa. So we may do best to simply take as primitive the notion of (objective, 'fact-based') reasons, and analyze the objective ought as a matter of what you have most reason to do.
The 'subjective ought' is slightly more elusive. Indeed, I think we need to further distinguish several more or less subjective concepts in this vicinity. On the one hand, we have the strict notion of what action is warranted, or perfectly rational, given the evidence about your empirical situation. We can call this the 'rational ought'. At the other extreme, we have the much watered-down notion of what you ought to do according to your (accessible) beliefs (however unreasonable they may be) – call this the 'belief-relative ought'. One might also identify various in-between notions. For example, if we think that agents can have justified but false non-empirical (e.g. logical or normative) beliefs, this creates room for a 'justified-belief-relative ought' that is weaker than what I have called the rational ought.
These different notions may be useful for different purposes. One traditional role for the 'subjective ought' is to enable us to evaluate the internal quality of an agent's decision. Return to our example of Sally and the demon. Though it would be fortunate if Sally chose to press button A, since this would actually save the most lives, such an irrational gamble would reflect poorly on Sally herself. Playing it safe with option C is a much wiser decision for her to make, and one that thus warrants a kind of positive evaluation on our part. I take it that the relatively strict notion of rationality is what we want for purposes of evaluating the internal quality of an agent's reasoning and decision-making. If Sally does what she rationally ought to do, this is evidence that she is a competent and reliable agent, who can be trusted to make wise decisions – and avoid disasters – in other cases too. (Consider: who would you prefer the demon to ask next time he plays this game: a gambler or someone who will play it safe?) The belief-relative notion certainly won't do here, because insane beliefs might render an agent deeply morally incompetent and unreliable, or even reliably bad. (Consider an agent who is certain that she has most reason to try to cause as much death and destruction as possible.)
Can we analyze rationality in other terms? I expect not. It is widely appreciated that we can't simply define rationality as a matter of doing what we likely have most (objective) reason to do, since in cases like Sally's, we judge that it is rational to do the one thing that the agent knows isn't objectively best. The shift to maximizing “expected utility” gets better results, and may well accurately capture what is – as a matter of normative fact – rationally required (though we'll later consider whether such an account fails to provide adequate “guidance” to agents); but if so, this is more plausibly understood as a substantive claim than as any kind of reductive definition. As in the case of objective reasons, I think we should be happy enough to just take the concept of rationality as primitive.
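To illustrate the contrast just drawn, here is a minimal sketch in code, assuming (as above) that Sally splits her credence evenly between 'A frees all ten' and 'B frees all ten' and that value is simply lives saved; the state names, outcome numbers, and helper functions are my own illustrative choices, not anything from the literature:

```python
# A minimal sketch of the contrast above, using Sally's case.
# Assumptions (mine, for illustration): credence is split evenly between
# the two possible states, and value is just the number of lives saved.

states = {"A_works": 0.5, "B_works": 0.5}   # credence over states of the world

outcomes = {                                 # lives saved by each act in each state
    "press_A": {"A_works": 10, "B_works": 0},
    "press_B": {"A_works": 0, "B_works": 10},
    "press_C": {"A_works": 9, "B_works": 9},
}

def expected_value(act):
    """Probability-weighted lives saved by the act."""
    return sum(p * outcomes[act][state] for state, p in states.items())

def prob_objectively_best(act):
    """Credence that the act saves at least as many lives as any alternative."""
    total = 0.0
    for state, p in states.items():
        best = max(outcomes[a][state] for a in outcomes)
        if outcomes[act][state] == best:
            total += p
    return total

for act in outcomes:
    print(act, expected_value(act), prob_objectively_best(act))
```

Pressing A or B each has a 0.5 chance of being objectively best (with an expected value of 5), while pressing C has no chance of being objectively best yet the highest expected value (9). So 'do what is probably objectively best' and 'maximize expected value' come apart in exactly the way the argument requires.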
A second important role for the 'subjective ought' is to track an agent's praise- or blame-worthiness. Sally seems praiseworthy for choosing option C, and would be blameworthy if she had instead gambled (even on A, the objectively best option). It's controversial whether this diverges from the previous role at all. I find it tempting to closely link blameworthiness with rationally unwarranted decisions. But one might think that agents can be reasonably mistaken about what rationality requires of them, and in such cases cannot reasonably be blamed for their failure. If that's right, then the 'justified-belief-relative ought' might better fill this role.
A third – and very different – role for the 'subjective ought' is to offer first-personal guidance to agents, delineating (as Holly Smith puts it) “a type of duty to which the agent has infallible access in his decision-making,” however confused or misguided he may be. This will clearly require a much more subjective 'ought' than the previous roles called for – perhaps something like what I earlier labelled the 'belief-relative ought' (noting, again, that we're only talking about accessible beliefs here). Indeed, I think it risks becoming so subjective as to no longer carry any normative weight.
Smith invites us to consider the case of Allison, who – despite overwhelming evidence – “cannot bring herself to consciously face the fact” that her daughter has a learning disability, though she knows this “in her heart of hearts”. Allison is offered testing that would help her learning-disabled daughter (but cause unnecessary teasing for an able child), and declines. Smith wants to say that Allison does the “subjectively right” thing, since she lacks access to her belief that her daughter has a learning disability, and so can't tell that she ought to accept the offer. Even so, I'm inclined to think that it is awful of Allison to decline as she does. At least on one natural way of fleshing out her underlying psychology, she is letting her own subconscious fears and hangups get in the way of doing what is best for her daughter. This seems blameworthy as well as unfortunate – the wrong decision whichever way you look at it. The decision manifests a poor quality of will (or bad underlying concerns on the part of the agent), and this is so even though the agent herself may fail to realize this. Quite apart from the external consequences, the agent's internal decision-making “malfunctioned”, or was morally defective, all things considered.
We might understand the most-subjective 'ought' as not denying this, but instead as merely claiming that there was some local well-functioning in the agent. This is like the way in which we might claim that someone who wants to bring about global catastrophe should try to trigger a nuclear war. Even someone with that malicious desire shouldn't really try to trigger a nuclear war. No-one should do that! The apparent claim to the contrary may instead be taken to indicate a wide-scope local consistency requirement: consistently desiring an end requires desiring the necessary means, so you shouldn't have the former without the latter. That's not to say that you should have the latter: maybe you should have neither! Still, if you desire both the end and the means, well, at least we can say one good thing about you: these two states, considered in isolation, are in compliance with this one rational requirement, considered in isolation. But of course that doesn't mean that you've done well all things considered. It may be that (other requirements establish that) desiring the end is completely crazy, in which case, again, you should give it up rather than desire the means. Desiring the means surely compounds your error, according to any properly 'global', all-things-considered normative evaluation. Likewise for doing what you 'ought', in the (accessible) belief-relative sense, if your (accessible) beliefs aren't what they ought to be.
So I think that this most-subjective sense of 'ought' is unhelpful. It sounds like an all-things-considered recommendation, when really the most that is justified here is a far weaker evaluation of local consistency. You might wonder: why would anyone think there is more to it? I think the main motivation derives from an exaggerated understanding of the sense in which morality aims to be “action-guiding”.
Even when considering the project of creating an 'instruction manual' that agents may use as a guide in making their way about the world, I think we should be satisfied by norms that will serve to guide a sufficiently competent and well-functioning agent in the right direction. Others have hoped to find moral norms that even the most incompetent agent can apply without difficulty. The problem with this hope is that incompetent agents will, being incompetent, inevitably end up doing rather poorly on many occasions. So if we want to tailor our instruction manual to cater to their limited abilities, we are going to end up instructing agents to act poorly. Any instruction manual that tells Allison to decline the test, for example, is offering bad advice. I can't see why we would want to do this. It may be true that, due to her rational failures, Allison won't be able to make use of any better advice. But that's no excuse to give her bad advice.
So we shouldn't feel compelled to write an instruction manual that's easy for anyone (however incompetent) to follow, because the resulting advice would no longer have any value. At the other extreme, I certainly acknowledge that there's little use for instructions that appeal to unknowable conditions (e.g. the Moore-paradoxical: “what to do if you believe that p but p is false at the time of reading this instruction...”). Still, there may be plenty of value to be found in an instruction manual that appeals to evidential conditions (e.g. “what to do if your evidence supports 0.2 - 0.4 credence in p”), since we are often able to judge our evidence correctly. Granted, if we are so ill-constituted as to be unable to respond appropriately to evidence, then we will be equally unable to be guided by these normative (rational) requirements. That'd certainly be unfortunate, but I think any complaint about this situation is more fairly directed at the world (for containing ill-constituted agents in the first place) than at our normative theory (for failing to achieve the impossible with them).
A final note: I've been implicitly assuming that the 'instruction manual' project is to be understood as inquiry into what advice an objectively well-constructed instruction manual would give us. (So, for example, the answers might be determined by the sort of 'Long Run principle' discussed here.) One might instead adopt a more 'Cartesian' project of showing how agents can build their own instruction manuals from scratch. Such a project seems likely to have much more 'subjective' results, being dependent on the agent's normative and logical starting beliefs, amongst other things. But again, I am suspicious about whether such extreme subjectivity is compatible with genuine normativity. Compare the analogous epistemological project: an insane agent might try to build up an epistemic system that began by rejecting modus tollens in favour of affirming the consequent, or that utilized “counterinduction” in place of induction. Suppose the counterinductivist believes that the sun has always risen in the past; is there any sense in which he really 'ought' to believe that the sun won't rise tomorrow? I would think not. Sure, it's supported by his first principles, but his first principles are insane.
I take the upshot of this to be that real normativity is inevitably laced with objectivity, in the sense that merely trying hard to be reasonable is no guarantee of success. Even in the case of the relatively subjective 'ought' of rationality, an element of luck (beyond our control) is involved: we need to be well-constituted as rational agents to begin with. Otherwise: garbage in, garbage out.