I've been wondering: what is morality about? I mean, what is the most fundamental analysis or 'essence' of the concept? Here are the possible answers I've thought of so far...
1) Morality is fundamentally concerned with prescriptive force - what you really ought, all things considered, to do.
The problem with this view is that I really haven't the faintest clue what 'prescriptive force' is, in any categorical sense, or whether it even exists. Same goes for the idea of what you 'really ought' to do (well, it's the very same idea, I think). I just cannot imagine what those words are supposed to mean. So long as I don't think about it too much, I can use the words competently; I know how to use prescriptive language. I just don't get what it means - what sort of claims I'm making in doing so. (I'm tempted to go non-cognitivist on this one. But not on morality; see below.)
2) Morality is about acting on reasons, or doing what you have most reason to do.
Perhaps (1) just means the same as (2)? I can at least understand this one, I think. Though if reasons are anything like I suggested in the linked post, then this would lead to ethical egoism, which is very different from how morality is usually understood. Perhaps that simply shows that my previous post was wrong? But if reasons are something else entirely, then I'm not sure I understand them after all.
It seems obvious to me that a selfish rogue could easily have reason to act immorally. For some people, given their aims/desires, it would be downright irrational to do the right thing. So I think it's a mistake to conflate (individual) rationality and morality in this way. I much prefer the next option...
3) Morality is about what's good for everyone, socially rational, or justified from an impartial point of view. (I take those three to be equivalent.)
One might complain that any given individual won't necessarily have reason to act altruistically. But given my earlier remarks, it should be clear that I consider this a point in favour of this analysis!
[Updated to add:]
4) Norm expressivism. Moral language is for expressing the mental state of accepting a norm.
I think this is far and away the most plausible form of moral non-cognitivism. But that's not saying much. I still think it's silly to deny that moral beliefs are, well, beliefs, i.e. have cognitive content and so can be true or false.
Can you think of any other options? Which analysis do you prefer, and why?
Saturday, March 12, 2005
12 comments:
Tough question. (3) is going to have problems too, though, because anti-consequentialists believe in morality that does not answer to (3). Some people think that it's wrong to kill people, period. And this is an agent-centered restriction -- it's wrong to kill people even to prevent other people from killing people. So some people deny that morality is impartial in that way.
See The Case for Objective Morality for the common basis of any rational moral system (causality and science).
Richard, Prescriptive force shows itself when you recognize that what you ought to do is not the same as what you at the moment most palpably desire to do. You see the old lady drop her fat wallet. The strong urge to take it flares up inside you. Yet you realize that you ought not, i.e., that your desire not to do such things to old ladies is stronger, though perhaps not as palpable at the moment. (This holds in the case of the psychopath who doesn't have that stronger desire.)
Non-cognitivism won't work. Psychopaths are living refutations of it. "I ought" can refer to facts about desires widely shared in my society. Is there some reason to think that a society of people that lives according to its widely shared set of altruistic and self-interested desires may be living as it ought not? No. And when anyone in the society gets a momentary urge to steal from old ladies, this urge is inconsistent with that set, creating the normative force.
Objective realists will want to identify "real" prescriptive force with force "independent of any desires", and will pooh-pooh the notion of prescriptive force I just sketched. Well, they can pooh-pooh it as they wish. It's still a real force. They want a Big Force (whatever that is). But they can't say why we need one.
Jonathan - is (3) necessarily inconsistent with agent-centered restrictions? I agree that consequentialism fits best, but it might also be possible for deontological rules to be "justified from an impartial point of view"? I'm not sure though. I'm tempted to just say, "so much the worse for deontology!"
Jack - judgments of what, though? You seem to be suggesting they don't actually correspond to anything real whatsoever - a view known as 'error theory' or nihilism. But I think it isn't difficult to demonstrate that notions of good/bad and better/worse do indeed have a meaningful basis in reality. Watch out for my upcoming post on that topic.
Jim - I think I agree with you; I have no problem with prescriptive force insofar as it arises from an agent's own desires. (Follow the link in the main post.) It's the categorical sort, or "Big Force" (as you put it), that I can't comprehend.
I updated my post to (hopefully) clarify that I didn't mean to suggest I was a non-cognitivist about morality. But I do think it's plausible with regard to talk about "Big Force" prescriptivity. Norm expressivism might make sense of it? Or perhaps simple nihilism is the better option - there's no such thing as "Big Force" prescriptivity, so all claims to the contrary are simply mistaken.
Richard, I see.
Yes, go with that nihilism. The key is that if the man on the street means by normative force what the Big Force people do, you're sunk and naturalism fails. You would then have to choose full-blown nihilism about morality or accept Big Force non-naturalistic objective realism. I maintain that the man on the street agrees with me that motivation not to do wrong to the old lady boils down to your not desiring her to suffer harm. ("I would hate to see her crying over her lost wallet later on.") He doesn't think of Big Forces. ("I don't care about the old lady, I just don't like to do anything Wrong," is something he doesn't say.) Thus naturalism goes through and the Big Force objective realist is left out in the cold implausibly maintaining that you need Big Force to avoid full-blown nihilism about morality.
Followed the link, and I agree. Note that the psychopath (devoid of altruistic desires) can do wrong even though he has no reason to avoid doing so. This is because he violates his society's shared desires. Morality isn't individualistic. It's social. I think you need to amend your "agent's own" (that you just wrote to me) along those lines. It's not a radical change to your view, anyway.
I've continued this discussion in a new post...
I don't think Morality can be considered objective.
I think moral judgements do in some sense entail prescriptive statements, if you agree with Hare that they must be universalizable. At least moral principles must be; particular judgements can then be based upon those principles.
But this is not all there is to it. Reasoning is certainly involved; otherwise we couldn't rationalise and justify moral decisions like abortion. And I don't think the non-cognitivists have it right in suggesting that you are not expressing a belief about something: if you say abortion is wrong, you clearly believe something about some quality of the act of abortion or its consequences.
Reason alone does not account for moral judgements, because if this were the case, two fully rational beings in full grasp of the facts of the universe would come to the same conclusions about every moral issue. Clearly that isn't the case, for then differing moral viewpoints would solely be a result of varying degrees of ignorance and irrationality.
What is missing is the place of values: at the core of moral judgements are value judgements, which are not truth-assessable and are mostly non-debatable.
I'll be writing about this soon on my blog.
I think strongatheism.com has it right in saying: "The unit of ethics is values".
I think you should be careful not to confuse morality with normativity. I take morality to be a system of behavioral norms that guides action and structures social interaction. You need to leave it as an open possibility that people sometimes have no reason to be moral, or that the claims of morality sometimes lack normative or binding prescriptive force. And what is good for everyone or socially rational may turn out to be radically different from the body of norms we think of as morality. So, again, you shouldn't force a fit between socially rational practices and morality by defining one in terms of the other. You might propose a revisionist conception of morality, but then you need to be clear that what you want is for people to use language differently than they do. Morality, like etiquette, is exactly what it is. Whether and when you should care about it is the interesting question.
Illusive Mind - I'm not so sure that factually-omniscient people would still have any moral disagreements. Perhaps they would disagree about what they prefer to happen, but surely all would agree on what is best (for everyone overall), and I think that is just what 'morality' means. (Perhaps they could also disagree over the meaning of the word 'morality', but that would not be a very interesting sort of disagreement! Once you fix it on one interpretation or other, the moral facts must be determined by the natural ones, so all would be in agreement.)
I think value is central to ethics. But by this I mean the objective sense in which something is or is not of value to us (i.e. whether it fulfills desires in actual fact). I do not mean mere individual preferences (as I think you do). Morality is not agent-relative.
Jason - I thought inanimate objects did not act at all? So they surely cannot act im/morally. None of this seems to depend on making reasons fundamental to our moral concept.
Will - I very much agree that "people sometimes have no reason to be moral", I think I said as much in the main post. But I do want a stronger concept than mere 'societal norms'. We can (and, I think, regularly do) separate sociological morality [the principles a group accepts] from philosophical morality [the principles they ought to accept]. There's nothing particularly revisionist about discussing the latter concept. (No more than it would be to discuss orange-the-colour rather than orange-the-fruit!)
Jason responds [via email, as comments were down]:
"I should have said "nonliving" rather than "inanimate," and the point is a good one if I do: We cannot speak of the morality of a thunderstorm or even of a wristwatch. (For manmade objects, we must speak, I think, of the morality of their creators and/or users. To talk about an interior morality possessed by the objects is difficult, however.)
What all of this has to do with reason is that nonliving objects do not make reasoned choices to the same degree that we do. In proportion to an object's ability to make reasoned choices, we may say that a morality exists for that object. I'm drawing here on Daniel Dennett, particularly in the last few chapters of _Darwin's Dangerous Idea_."
Minor semantic quibble: my understanding is that action requires intention. Things like thunderstorms or muscular spasms aren't actions, but merely events.
Substantive response: Now, I fully agree that morality only applies to agents, not thunderstorms and the like. (We can judge the tsunami to have had bad consequences, of course, but we cannot hold it responsible or blameworthy - the very idea is nonsensical.)
But I don't see how any of this specifically supports conception #2. We can agree that only agents are subject to moral praise or blame, and yet combine this thesis with any of the views described in the main post.
I don't think it shows that (2) is necessary. We all agree that morality requires agency/reason, but it does not follow from this that morality is fundamentally about acting on what you have most reason, all things considered, to do.
Intentional betrayal also requires reason. Of course, no-one would think to define 'intentional betrayal' in terms of what you have most reason to do -- even though it could be made equivalent according to some [perhaps implausible] conceptions of reasons.
Also, note that (2) cannot appeal to "right reasons" - that would be circular, as the 'right' is precisely what we are trying to analyse. (2) simply asserts that morality is about acting on whatever you have the most reason to do. This makes morality far too dependent on our conception of reasons. According to instrumentalists about reason (such as myself), morality would degrade into egoism, which it clearly is not. As I said in the main post:
"It seems obvious to me that a selfish rogue could easily have reason to act immorally. For some people, given their aims/desires, it would be downright irrational to do the right thing. So I think it's a mistake to conflate (individual) rationality and morality in this way."
I think morality is (fundamentally) about what's good for everyone. It cannot be assumed that individuals will necessarily have the most reason to act on this, however. So morality cannot be identified with what we have most reason to do.
This is a thoroughly engaging post. I too am interested in this question of morality.
There are, I think, a couple of caveats that must be made, or at least that I've been made to understand.
(1) We live in the real world, and so any theory must account for and apply substantively to reality.
(2) Since overgeneralizations and purely epistemological properties are hard to analyze (and understand) effectively, we must be concrete about what we are analyzing, e.g. an example that highlights the particular aspect of morality to be analyzed.
I had at first thought to engage the ideas of "good", "evil" and "just", but these proved unwieldy, so I was referred to agency by a teacher. His example was as follows:
There are two cases:
C(1) - A man is driving down the street in his residential neighborhood in a brand new Ferrari; as he reaches the end of the street, his neighbor busts out of his front door with some sort of gun and riddles the car with bullet holes, effectively totaling the car.
C(2) - A man is driving down the street in his residential neighborhood in a brand new Ferrari; as he reaches the end of the street, a tree at the corner falls and lands on his hood, effectively totaling the car.
How do we analyze these cases, or, more specifically, how ought the man to respond in each case? Minimally, of course, we assume some sort of emotional outburst. Here are some salient points: the man had no knowledge that his car was going to be totaled (at least we assume this), and the car is totaled.