Saturday, January 02, 2010

January Open Thread

Given that I've just offered a kind of global summary of my views, it seems like a good time to invite readers to identify and discuss any points of general disagreement. Time, that is, for an open thread:

Sometimes, when reading a blog, you may get the feeling that the blogger's posts are infused by a fundamentally misguided assumption. But such deep-rooted disagreements typically can't be raised within the scope of any particular post. So consider this open thread an invitation. Do you find yourself raising an eyebrow at some of my basic presuppositions? Any disagreements that run so deep you wouldn't even know where to start? Try here!
5 comments:
There are a few things:

1. I suppose that you might want to talk about aggregation. (Or you could point to a post where you have.)
Of course, this goes to the very heart of consequentialism, but I think aggregation is not adequately justified. Given that welfare refers to the stuff that is desirable for one's sake, it is not clear that I can simply add the stuff that is desirable for my sake to the stuff that is desirable for your sake, and so on. Nor is it clear that we can add the extent to which I have the things that are desirable for my sake to the extent to which you have yours. Of course, everything else being equal, we would both agree that the state of affairs would be better if I had more of the stuff that is desirable for my sake, but that in itself is not aggregation. That a state of affairs would be better if my welfare were better tells us nothing about how to handle conflicts of welfare. (The assumption at issue is stated formally below this comment.)
As an alternative to aggregation, I would argue for something inspired by Kant. Any conflict can be resolved in one way or another. We could then ask whether it is desirable for my sake that there be a universal law requiring similar conflicts to be settled in one particular way rather than another.
2. Another thing you could look at further is the issue of fittingness and reasons. In your fittingness and fortune post, I mentioned something about prisoner's dilemmas, ideal agents, and fittingness. What is your take on it? I have done a massive post on my own blog.
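To make the disputed assumption in point 1 explicit: one standard formalization of aggregation (the notation here is illustrative, not the commenter's) ranks states of affairs by summed welfare, so it settles conflicts by whatever maximizes the sum.

```latex
% Aggregation, standardly formalized: the value of a state of affairs
% x is the sum of each person's welfare in x,
W(x) = \sum_{i=1}^{n} w_i(x),
% and x is better than y just in case W(x) > W(y).
% The worry above: endorsing an improvement where some w_i rises and
% no w_j falls (a Pareto judgment) does not yet license the sum, which
% further requires the w_i to be measurable on a common interpersonal
% scale.
```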
1. I agree that we can't just aggregate welfare, at least when comparing cases of differently-sized populations. (See my 'Welfare and Contributory Value'.) But we can probably aggregate in same-number cases. It seems to me that this is what rational agents would choose from behind a veil of ignorance, for example (a worked sketch follows below).
2. Can you summarize your point in a sentence or two?
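A worked gloss on the veil-of-ignorance remark in point 1 above (the formalization is an assumption of this sketch, not taken from the post): with n people and an equal chance of occupying any position, maximizing expected welfare coincides with maximizing total welfare whenever n is held fixed.

```latex
% Behind the veil you have an equal chance 1/n of occupying each of
% the n positions, so your expected welfare under outcome x is the
% average welfare:
\mathbb{E}[w \mid x] = \sum_{i=1}^{n} \tfrac{1}{n}\, w_i(x)
                     = \tfrac{1}{n} \sum_{i=1}^{n} w_i(x).
% With n fixed (same-number cases), ranking by expected welfare and
% ranking by total welfare coincide:
\mathbb{E}[w \mid x] \ge \mathbb{E}[w \mid y]
    \iff \textstyle\sum_{i} w_i(x) \ge \sum_{i} w_i(y).
% In different-number cases n itself varies with the outcome and the
% equivalence fails, matching the caveat above.
```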
1. Aggregation may be okay for some small or moderate level of harm, but it might still be rational to be somewhat risk-averse. It seems rather irrational, for example, to sign up for the survival lottery even though the expected pay-off is positive (see the worked illustration below this comment). That means I may be rational in wanting to avoid the more severe (and not utterly improbable) consequences completely, or to reduce their risk to something very low. The question of which option to take is actually indeterminate (in many ways, a lot like Newcomb's problem).
2. One or two sentences will be difficult, but I'll try. Basically it started off with some thoughts that coalesced towards the end of a paper I wrote for a module I took under Kyle Swan (I believe you know him? He used to blog at Pea Soup). Basically, I was saying that, given the buck-passing theory of value, a properly idealised agent would know what was objectively valuable and desire it. However, we have a problem: since we are neither perfectly rational nor know all the relevant facts, we cannot know what the ideal agents would know or desire. The core idea is that it is logically possible (i.e. conceivable) that a society of such ideal agents/evaluators would exist, and this places a limit on what can count as valuable (over and above the standard limits concerning how consistent one set of goals/desires is with another). The concept was refined over time, and eventually it became more like this.
Now, to get to what the post is about. Your post on fortunate and fitting dispositions got me thinking. For us normal people, what is fortunate and what is fitting, as far as dispositions go, does come apart. This is because we often have to compensate for our inadequacies. It shouldn't come apart for my ideal agents, however. Yet it seems that the prisoner's dilemma is a case in which it will come apart for rationally self-interested agents (see the prisoner's-dilemma sketch below). Since that is impossible, it has to be the case that being purely self-interested is not rational. As a general method, then, we can apply the categorical imperative (formula of universal law) to any putative "value" to determine whether and to what extent it would be valuable.
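On point 1 above, a minimal worked illustration of how a positive expected pay-off can coexist with rationally declining the survival lottery; the probabilities, pay-offs, and utility values are assumptions of the sketch, not figures from the comment.

```latex
% Suppose the lottery yields a small benefit of 10 with probability
% 0.99 and death, modelled as a loss of 500, with probability 0.01:
\mathbb{E}[\text{pay-off}] = 0.99 \times 10 - 0.01 \times 500 = 4.9 > 0.
% A risk-averse agent evaluates prospects by a concave utility u that
% weights severe, uncompensable losses disproportionately; if, say,
% u(10) = 9 and u(-500) = -2000, then
\mathbb{E}[u] = 0.99 \times 9 + 0.01 \times (-2000) = -11.09 < 0,
% so refusing the lottery maximizes expected utility despite the
% positive expected pay-off.
```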
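On the prisoner's dilemma point above, a minimal sketch (the payoff numbers are illustrative assumptions): defection is the dominant move, hence the "fitting" choice for a purely self-interested agent, yet two such agents fare worse than two cooperators, so the fitting disposition is not the fortunate one.

```python
# Illustrative prisoner's dilemma payoffs as (row player, column player);
# the specific numbers are assumed for this sketch.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_reply(opponent_move):
    """The move a purely self-interested agent picks against a fixed
    opponent move, maximizing its own (row) payoff."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, opponent_move)][0])

# Defecting is the best reply whatever the other player does, so it is
# the dominant ('fitting') choice for a self-interested rational agent:
assert best_reply("cooperate") == "defect"
assert best_reply("defect") == "defect"

# Yet a pair of such agents each get 1, while a pair disposed to
# cooperate would each get 3: the fitting disposition is unfortunate.
print(PAYOFFS[("defect", "defect")])        # (1, 1)
print(PAYOFFS[("cooperate", "cooperate")])  # (3, 3)
```

Nothing turns on the particular numbers; only the ordering temptation (5) > mutual cooperation (3) > mutual defection (1) > sucker (0) matters.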
Why do you assume that ideally rational dispositions are thereby also the most fortunate dispositions to have? Just suppose the world is such that being irrational in a particular respect will lead to great rewards. This is clearly possible...
"Just suppose the world is such that being irrational in a particular respect will lead to great rewards. This is clearly possible..."
The way I see it, the only way in which this would obtain is if:
1. We are inadequate in some way. E.g., with consequentialism, because we are incapable of calculating the value of the consequences of each action (or doing so would be too time-consuming), we would maximise utility better if we adopted dispositions that did not actually respond to any genuine reason-giving thing. Similarly with hedonic paradoxes: it is merely the case that we seem to be incompetent at directly pursuing pleasure.
2. The world is rigged by some demon/monster so that everything would be destroyed unless you acted irrationally. (And this may involve taking a pill...)
But assuming that an individual is fully competent in a means-end kind of way, and there are no demons or monsters screwing things up, then a fully rational and knowledgeable actor would perform the best possible sequence of actions for achieving his goal (or come as close to achieving it as possible). In this sense, his dispositions are fortunate. His dispositions also have to be fitting, because I specify in advance that he is rational and responds only to reason-giving objects. That is, while fortune and fittingness could possibly come apart, they do not always do so; barring incompetence and/or monsters, they shouldn't come apart.