What we can do rationally is what we can do by means of a reasoning procedure, and our reasoning procedures can only operate on our actual representations. Only when a potential action is (at least minimally) rationalized by their beliefs can agents perform it rationally. If I do not represent the state of affairs at which an action is aimed as desirable, or as furthering the satisfaction of my goals or ends, then I cannot settle on that action by means of a reasoning procedure; if I perform that action, I will do so only as a consequence of a failure of my reasoning.
But this neglects the obvious possibility that the agent could revise their unjustified beliefs and desires. Ex hypothesi, Potter's false normative beliefs are unreasonable. So there is available evidence that could - and rationally should - move Potter to form more reasonable normative beliefs instead. These in turn should rationally move Potter to act less selfishly. Where in this picture is the alleged "failure of... reasoning"? Potter is now reasoning (and responding to evidence) better than ever before!
Levy seems to be making the mistake of confusing internalism with subjectivism. It's true that rational assessment must be 'internal' in a sense: it depends on what evidence and information the agent possesses, rather than just on what's objectively true. But, as my linked post explains, that's a far cry from saying that an agent is rational whenever they believe themselves to be so (or that such self-satisfaction entails that they rationally shouldn't revise their attitudes).
See also: 'Rational Akrasia' and 'On Rosen's Moral Responsibility Skepticism'.
Any response to Levy is going to depend on some account of what we ought to be held morally accountable for, but it seems a plausible reply (in addition to yours above) might be the following: We are accountable not for our actions, per se, but for our character (as exemplified by certain actions). Character, though, just consists in our beliefs and attitudes toward other beings (and, more complexly, relevant states of affairs). For instance, in the case of torture, one is not blameworthy for the specific actions performed (say, cutting a person), since those actions, under other circumstances (such as surgery), are permissible. Rather, one is held accountable for their character: their dispositions, and beliefs about other people (e.g. that they are good for torturing).
On this account, a sadist certainly has a rationale behind their actions, but that doesn't get them off the hook. In fact, it's precisely this rationale, not the actions themselves, that the person is held accountable for.
It also seems to me that Levy's account might get everyone off the hook, and at the very least gets the most terrible people off the hook. One might intentionally fudge their own ethics in cheating on their taxes, but the most heinous acts are almost always committed under some justifying rationale (say, that homosexuals are wicked in the eyes of God and deserve death).
Richard Chappell wrote: “There is available evidence that could – and rationally should – move Potter to form more reasonable normative beliefs instead.”
What evidence is that, I wonder. After all, it looks eminently rational to act for one’s own gain, and it looks distinctly irrational to act against one’s own gain.
Further, the claim that Potter *should* form more “reasonable” normative beliefs is itself a normative belief. Why should Potter accept it, if it goes against his interests? It seems that the attempt to ground ethics on rationality is a circular attempt to bring in normative axioms in disguise.
Michael - yeah, I'm sympathetic to a 'quality of will' account, according to which people are blameworthy for willing what is intrinsically bad (even if they think it's good). But it's worth stressing that my response to Levy doesn't rest on any such controversial assumptions. Even if you think that bad motives can be excused, say if the person was raised in such a misleading environment that their false normative beliefs were epistemically rational, that doesn't extend to this kind of case. Levy wants to argue that even epistemically irrational normative beliefs excuse bad behaviour -- but (I contend) his argument for this radical conclusion just doesn't work. It's worth making this negative point independently of any more contentious positive theory we might want to offer. (But I very much agree with you that it's worth developing an alternative positive view -- and that something along the lines you describe seems very plausible.)
Dianelos - You've confused the dialectic. I'm not arguing that egoism is unreasonable. That's stipulated to simplify the case. (If you don't like it, you can change the details -- replace egoism with something else -- it isn't really relevant to the issue at hand.)
Richard,
It's not a consequence of the view that culpability is a belief-relative matter that, as you put it, "an agent is rational whenever they believe themselves to be so." (FWIW, I think that there's a sense in which this latter claim is true. If B1 is a belief about what it's locally rational to do relative to B2 and B3, then what it's locally rational to do relative to all three beliefs may be different from what it's locally rational to do relative to B2 and B3, sans B1.)
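(A rough way to put that parenthetical point in symbols, where LR is shorthand introduced just for illustration; it isn't notation from Andrew's comment or Levy's paper:)

    % LR(S): what it is locally rational to do relative to the belief set S
    % (hypothetical shorthand). B_1 is itself a higher-order belief about
    % LR(\{B_2, B_3\}), e.g. the belief that LR(\{B_2, B_3\}) = \varphi.
    \[
      LR(\{B_1, B_2, B_3\}) \;\neq\; LR(\{B_2, B_3\}) \quad \mbox{is possible.}
    \]
    % Adding the agent's belief about what is rational can itself shift the
    % local-rationality verdict toward what the agent takes to be rational,
    % which is the sense in which the quoted claim can come out true.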
Also, I was hoping you could say something about what it is to "have" evidence. My worry is that unless this "having" relation is narrowly psychological, it will be partly a matter of luck whether you act on the evidence you "have". Also, I don't see why, unless the relation is narrowly psychological, "rational assessment" should count as "internal". It may help to say what you take evidence regarding normative matters to be.
Also, randomly, what do you think about the possibility of INculpatory ignorance -- of normative or non-normative matters? Suppose I have the wrong -- hell, the obviously wrong -- moral views, and act in such a way that, were those views right, my action would be horribly wrong. What's the verdict?
Finally, I think the accusation of "confusing internalism with subjectivism" is misplaced. It implies that there are two distinct positions that are being conflated. But this is not what's going on. You just happen to think that culpability is evidence- rather than belief-relative, and Levy, from what I can see, disagrees.
Oh, okay, one more thing: Levy is right that when you believe you ought to do A, it's rational to do A. But does he also allow that you can reason your way to B by revising your beliefs about what you ought to do in light of other beliefs (normative and non-normative), emotions, appearances, etc.? This would seem to make his view more palatable.
-Andrew
Hi Andrew, I'm not following you here. I think that to "have" evidence is to believe it (isn't that "narrowly psychological"?). I don't know what you mean by "belief-relative" in this context, since my internalism and Levy's subjectivism are both "belief-relative" in different senses (as explained in my linked post). Do you deny that these are "two distinct positions"?
"Levy is right that when you believe you ought to do A, it's rational to do A."
Eh? I thought it was uncontroversial that those narrow-scope conditionals are false. (If I'm irrational in believing that I ought to phi, then phi-ing is not the most rational thing for me to do. What rationality requires in this case is for me to revise my antecedent belief, not to follow through on it!) Are you disputing this?
"But does he also allow that you can reason your way to B by revising your beliefs about what you ought to do in light of other beliefs (normative and non-normative) emotions, appearances, etc.? This would seem to make his view more palatable."
That would make his view my view! So I would certainly like him to allow this. But his whole argument requires denying it. Again, his argument against blaming Potter is that Potter couldn't rationally act in the way we're demanding. Here's a key quote: "It is not reasonable to blame agents for actions they cannot (intentionally) omit by way of some reasoning procedure; we cannot hold them responsible for failing to do things they could do only by chance or through a glitch in their agency or what have you."
If Potter can revise his normative beliefs in the obvious ways you've just described, then he could (as my post explains) rationally refrain from performing the wrong action -- no 'glitches' required. This defeats Levy's argument against blaming Potter. No?
Hi Richard,
Hypotheses sometimes matter. I think that as a matter of actual fact we are all like Mr. Potter, the difference only being that we do feel pangs of conscience and therefore take that discomfort into account when we try to maximize our gain. Now if one assumes, even only hypothetically, that this universal ethical state of mind is false and unreasonable, then it makes no sense to reason about ethics. By the very act of assuming this hypothesis we make reasoning about it incoherent.
Anyway, I take your point, and introduce Mr. Dumby who falsely and unreasonably believes that it’s morally required to kill as many living things as possible including ultimately himself. Is Dumby blameworthy?
Well, questions also sometimes matter. This question presupposes that there is such a property as blameworthiness, and moreover that it is reasonable to assign blame. Arguably (e.g. on Christian ethics) the latter is false. But if it is unreasonable to assign blame then it is also unreasonable to think about how blame is to be assigned. But let’s overlook these points.
If I understand you correctly you’d say that Dumby is blameworthy, because there is evidence against his false and unreasonable normative beliefs, evidence which he, being a rational being, should have considered. If so, my question remains: What evidence is that? What evidence would you suggest Dumby should have considered before forming his normative beliefs?
"Eh? I thought it was uncontroversial that those narrow-scope conditionals are false. (If I'm irrational in believing that I ought to phi, then phi-ing is not the most rational thing for me to do. What rationality requires in this case is for me to revise my antecedent belief, not to follow through on it!)"
It's definitely controversial whether those narrow-scope requirements are true. (I take it Kolodny was at least at one time sympathetic to that reading, and Mark Schroeder explicitly defends the narrow-scope version in 'The Scope of Instrumental Reason' and 'Means-End Coherence, Stringency, and Subjective Reasons'.)
In any case, your objection doesn't gain much traction. The narrow-scoper can say that sometimes what you're rationally required to do is exit the requirement--i.e. give up your belief. I take it in your imagined case it's stipulated that one is rationally required to give up the belief. Notice that by giving up the belief you don't violate the narrow-scope requirement. And so there's no real problem in thinking that sometimes what's rationally required is exiting the requirement.
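For concreteness, the two readings being debated here are often rendered roughly as follows; this is just a standard gloss in deontic-style notation, not a formulation that Levy, Kolodny, or Schroeder themselves are committed to.

    % O\varphi: the agent ought to \varphi.  B: the agent believes that ...
    % R: rationality requires of the agent that ...
    \[
      \mbox{(Narrow scope)} \qquad B(O\varphi) \;\rightarrow\; R(\varphi)
    \]
    \[
      \mbox{(Wide scope)} \qquad R\big( B(O\varphi) \rightarrow \varphi \big)
    \]
    % On the wide reading, giving up the belief B(O\varphi) is one way of
    % satisfying the requirement. On the narrow reading, once the belief is
    % held, only \varphi-ing complies; giving up the belief instead "exits"
    % the requirement, which is why exiting and satisfying come apart for
    % the narrow-scoper.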
[Andrew emailed the following comment:]
Hi Richard,
Re: the narrow scope requirement. I think such a requirement is true. I found Kolodny's arguments for the narrow-scope requirement in his 2005 and 2007 very convincing. So I don't know what you mean by "uncontroversial" here. If you can point me to a devastating response to Kolodny on these points, please do. (Although it will be sort of sad; narrow-scoping occupies a central place in my heart, right between my mom and Led Zeppelin III.)
Perhaps we're speaking past each other as it regards evidence and rationality. My view is that it's irrelevant to rationality whether P is evidence and whether I believe that P, unless a) I also believe that P is evidence, or b) P concerns one of my mental states. Otherwise, evidential support is one thing, and rationality is quite another.
With that in mind, I took you to be arguing that we ought to hold Mr. Potter culpable on the grounds that his bad behavior was based on beliefs that were unsupported by the evidence he had. E.g. the evidence consisted of P1...Pn, and Potter believed P1...Pn. And I was suggesting that, no, we should only hold him culpable if his behavior was based on beliefs that were irrational in the narrower sense I've specified, not just in contravention of the evidence he had.
I was also suggesting that among the mental states that might play a role in a "reasoning procedure" of the sort governed by norms of rationality are emotions, appearances, etc. Now, as you rightly point out, if Levy took this on board, this might lead him to a different verdict about the Potter case. But I was saying his taking this on board would be entirely consistent with his view that one is non-culpable in cases where one reasons to action in accordance with the norms of rationality but in contravention of the evidence. I thought you were denying this very view, and saying instead that one is culpable for bad actions based on beliefs that are unsupported by the evidence one has, even if the formation of those beliefs is rational in the sense I have in mind.
Does that help?
Dianelos - I'm responding specifically to Levy's argument, which grants that it's possible (if perhaps rare) for people to be blameworthy, and that normative beliefs can be epistemically irrational (contrary to evidence). Your more radical skepticism is beyond the scope of this post.
Errol - we seem to have a terminological issue here. If you can avoid violating a requirement by revising the antecedent belief rather than necessarily conforming to the consequent demand, then I'd call that a "wide-scope" requirement. (I guess you're thinking that there's some important difference between exiting a requirement and satisfying it?) But the terminology doesn't matter for present purposes. What matters is that if the antecedent beliefs are rationally revisable in this way (as they very clearly are) then Levy's argument fails in just the way I've described.
Andrew - thanks, that's a lot clearer. Levy wants to defend Rosen's view that all culpability originates in akrasia (self-acknowledged wrongdoing), so your in-between view wouldn't satisfy him (so long as you agree with me that it's possible for agents to violate norms of rationality without realizing it).
Focusing now on your more moderate view, I'm worried about your separation of possessed evidence from rationality. If P is undefeated evidence for Q, then it seems that an agent who believes P (and any non-normative 'enablers' for the evidence) could well be rational in reasoning from this to the abandonment of their antecedent ~Q belief. So it still doesn't look like the "can't expect any better" argument is going to work here.
(Just to clarify the dialectic: while I'm sympathetic to the positive view that people are eligible for blame when their acted-upon normative ignorance is in contravention of the evidence, it isn't my aim to defend any such positive view here. I'm just defending the negative claim that Levy's "can't expect better" argument fails. There might be other reasons to hold a view like Rosen's, but Levy's claim that the rest of us are demanding the rationally impossible isn't one of them.)
"Errol - we seem to have a terminological issue here. If you can avoid violating a requirement by revising the antecedent belief rather than necessarily conforming to the consequent demand, then I'd call that a "wide-scope" requirement. (I guess you're thinking that there's some important difference between exiting a requirement and satisfying it?) But the terminology doesn't matter for present purposes. What matters is that if the antecedent beliefs are rationally revisable in this way (as they very clearly are) then Levy's argument fails in just the way I've described."
It is uncontroversial that you don't violate the narrow-scope requirement by exiting. The violation state of the narrow-scope requirement is the incoherent state. And so you're right that the narrow-scoper has to think that there is some interesting difference between non-violation (e.g. exiting) and compliance. Otherwise there doesn't seem to be any interesting difference between the narrow-scope requirement and the wide-scope one. As it turns out, this means that the narrow-scoper must deny SDL (standard deontic logic). SDL doesn't make a distinction between compliance and non-violation, roughly, because it holds that everything that isn't forbidden is permitted.
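For reference, the SDL point being alluded to can be spelled out with the textbook definitions; nothing here is specific to Errol's or Richard's views.

    % Standard deontic logic (SDL): O = obligatory, P = permitted, F = forbidden.
    % Permission and prohibition are defined in terms of O:
    \[
      P\varphi \;:=\; \neg O \neg \varphi \qquad\qquad F\varphi \;:=\; O \neg \varphi
    \]
    % So "everything that isn't forbidden is permitted" falls straight out of the
    % definitions: \neg F\varphi just is \neg O\neg\varphi, i.e. P\varphi. And
    % given a requirement O\varphi, the agent either makes \varphi true
    % (compliance) or \neg\varphi true (violation), leaving no room for a third
    % status of mere non-violation of the sort the narrow-scoper needs.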
Anyway, I never meant to disagree about your response to Levy. I'm very sympathetic (as you know) to your type of response. I was simply interested in the side issue as it's been something I've been thinking a lot about over the break.
Richard:
Still, if you can’t name the evidence that Dumby could or should have used to change his false and irrational beliefs, then your response to Levy doesn’t work. For if somebody as smart as you cannot see that evidence, one cannot expect somebody as dumb as Dumby to see it; hence one can’t expect better of him, and he is therefore not blameworthy.
But perhaps you can see the evidence, but do not find it worthwhile to describe it here. If so, the problem I have is that I can’t see the evidence either. Actually I don’t think that compatibility with evidence is what makes normative beliefs rational. Rather, as I understand Andrew to be saying, the rationality of normative beliefs may have more to do with how well they comport with a particular kind of natural emotion or quality of appearance entailed in the human condition.
Dianelos - see my response to Andrew. (Firstly, even if you think evidence and rationality come apart, Levy needs his argument to extend to cases where the agent is irrational - without realizing it - in their normative beliefs. Secondly, Levy's argument is that we cannot expect better because it would be irrational for the agent to do as demanded. And I've shown that this is false, even in the 'evidence' case: there's a way for the agent to meet the demands, namely by believing on the basis of the evidence, which doesn't require any irrationality on the part of the agent. You might think there's some other sense in which we "can't expect" the agent to achieve this, but again, I'm only addressing Levy's specific argument, which is that we're allegedly demanding the rationally impossible.)