I previously argued that practical reasoning is intimately tied to actions, not act-sequences. One possible response (which I owe to Doug Portmore) is to say that our practical reasoning can conclude in an intention to φ, where "‘φ’ can stand for either some basic act or some act-sequence which contains a sequence of basic acts performed over some temporally extended time period."
I'm happy to countenance complex actions, like firing a gun or driving to the store. Despite comprising "a sequence of basic acts performed over some temporally extended time period", complex actions are not mere sequences. They're things we can do with a single 'exercise of agency', so to speak. That is to say: even if they won't be completed until a future time, they're still things I can decide to do now, and my decision will be effective. Complex actions may thus be included among the options available for my present self to choose.
Not all act-sequences are like this, because how we act in future is not always under our present deliberative control in this way. Sometimes, the intentions we form now will (predictably) fail to prove effective. I take it that this is what's meant to be going on in Jackson's famous "Professor Procrastinate" case. PP might fully intend to complete the book review on time when he agrees to do it. But he won't achieve this: as his track record has established, time and again, when it comes time to actually write the review he will find some excuse to delay. His prior good intentions come to nothing. Since he should have known this would happen (and, the case stipulates, it's worse to delay a review than to immediately decline it), Actualists argue that he should not have agreed to review the book. Intrapersonal co-ordination problems are in this way normatively similar to interpersonal ones: when you know that your future self is a defector, only a fool co-operates. You should do what will actually be for the best, not what could possibly be best, if only the other agent would (contrary to fact) cooperate.
Returning to the original question: Can we have reasons for [intending to perform] mere act-sequences in addition to reasons for [intending] actions? First, I reply that we can have reasons for intending complex actions (i.e. sequences that are within our present deliberative control). Can we further have reasons for intending mere sequences? This is less clear to me.
There seems something defective about forming predictably ineffective intentions. If you know that your future self will override your present intention to φ, how can you maintain the intention? (You might hope to trick yourself into forgetting that you won't actually φ, as one might attempt in response to Kavka's toxin puzzle, but I take it this is just a case of attempting to manipulate oneself into holding an irrational intention.) What about an intermediate case, where your present intention has (say) a 50% chance of proving effective? Perhaps you could then maintain the intention, in a hopeful spirit; though whether one should do so will presumably depend on the expected utilities. For example, if the expected value of accepting the book review, given a 50% chance of completing it on time, exceeds that of declining, then that's what Prof. Procrastinate should do. Otherwise, he should still decline.
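To make that expected-value comparison concrete, here is a toy calculation with purely hypothetical utilities (the numbers below are my own illustrative assumptions, not part of Jackson's case):

```python
# Hypothetical utilities for Prof. Procrastinate (illustrative assumptions only).
u_on_time = 10    # accepts and completes the review on time
u_late = -5       # accepts but delays (stipulated to be worse than declining)
u_decline = 0     # declines immediately

p_effective = 0.5  # assumed chance that his present intention to review proves effective

ev_accept = p_effective * u_on_time + (1 - p_effective) * u_late
ev_decline = u_decline

print(ev_accept, ev_decline)  # 2.5 vs 0: accept; with p_effective = 0.2 it's -2.0 vs 0: decline
```

On these made-up numbers, a 50% chance of efficacy vindicates accepting, while a sufficiently low chance vindicates declining, which is all the paragraph above claims.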
So I guess we can, in this way, find a modest place in our ethics for long-term intentions of questionable efficacy. But their deontic status is largely determined by the agent's present scope of control. In particular, the mere fact that an act-sequence would be best is not sufficient reason to intend to do it, if the early steps are actually far more likely to lead to disaster. On the other hand, the actualist can certainly endorse forming effective long-term intentions. As I emphasized in my original response to Portmore, Actualists insist that we pick the best of our present options, which may include protecting against future vice. For example, our present options may include the forming of resolutions or long-term plans or commitments that can reasonably be expected to sway our future behaviour. Actualists will straightforwardly advise choosing such options when they would lead to the best outcomes.
The dispute, I take it, is whether one should presently intend (and begin to carry out) an intrinsically desirable course of action that would unfortunately be averted in disastrous fashion by one's future choices. The possibilist says yes: it suffices that one's future self could have chosen to stay on the path to utopia -- that is reason enough for one's present self to start on this path. For the actualist, it makes all the difference in the world whether one's present good intentions will actually prove effective. If not -- if, despite your present best efforts, your future self will choose to defect and turn you down the road to hell instead -- then you'd best avoid this imprudently risky journey altogether. Good as the utopian act-sequence might be, it is not an option (action) available to your present self to choose. To nonetheless advise planning and acting as if it were, is (it seems to me) to advise delusion.
Hi Richard,
Here are some questions:
(Q1) Does Professor Procrastinate have "present deliberative control" over whether he later writes the review? I think that you want to say "no," right? Is it your view, then, that PP can't have a reason against intending both to accept the invitation and to never write the review?
(Q2) If the sort of control that's relevant to deontic assessment is present deliberative control, then why wouldn't you also say that this is the relevant sense of control in identifying what an agent could do? The case stipulates that PP could (in the relevant sense) accept the invitation and write the review. This seems to force you to say that the sense of control that's relevant in determining what PP ought to do is distinct from the sense of control that's relevant to determining what the agent's available options are. Isn't that odd?
(Q3) When you talk about a 'present option', does that include everything that the agent could do simultaneously at the present moment or just one of the acts that the agent could do at the present moment? Suppose that I could, at present, step on the brake and turn the wheel to the left and that I could, at present, step on the brake and turn the wheel to the right. Is stepping on the brake an option or just a part of an option?
(Q4) Suppose that I must decide whether to step on the brake and that what I would do if I were to step on the brake is turn the wheel to the right, which would be very bad. Assume that I always turn the wheel to the right when I step on the brake. Assume, though, that I could step on the brake and turn the wheel to the left, which would be very good. Should I step on the brake? I realize that I should step on the brake and turn the wheel to the left. But that's not my question. I'm asking whether I should step on the brake. What's the answer to that question?
(Q5) What do you want to say in the case where Professor Procrastinate has every reason to believe that he will write the review if he accepts the invitation? Assume that as a matter of fact he would not write the review were he to accept. Should he accept the invitation?
(Q6) When you use words like 'should' and 'ought' in your post, is this the objective or the subjective 'ought'?
(Q7) Why are you talking about advising in the last sentence of your post? Possibilists aren't advising people what to do; they're describing what it is that makes it the case that one ought to do something: viz., the fact that they could do it and that doing so is best.
Some answers...
(A1) What PP has control over depends on the details of the case. For example, PP may have asymmetric control: it may be that even if he can't now form an effective intention to write the review on time, he could form an effective intention to (both accept and) not write the review. If this is, in this way, an option for him, he certainly shouldn't pick it. And if it isn't an option, well, he shouldn't pick it then either, for more trivial reasons.
(A2) This is the core issue. I absolutely would say that I'm talking about "the relevant sense of control in identifying what an agent could do". The sense in which PP "could" accept the invitation and write the review is a different, derivative sense. It is just to say that each of the component actions (accepting the invitation, and writing the review) are options that PP has at various times. It is absolutely not stipulated as part of the case that the conjunction is itself an option available for PP to choose at some time.
As I wrote in my earlier post: "The central flaw in possibilism is that it conflates what is in an agent's control at some time or other with what is under the agent's control at the current moment of decision."
Once we make this distinction, Possibilism starts to look very unmotivated. Like saying that because the individuals in a Prisoner's Dilemma could have cooperated, "they" should have. There is no single choice-point from which mutual cooperation can be chosen. Each chooser must do the best they can -- which may well be to defect, if that's what the other would freely choose regardless. The sense in which the pair "could" mutually cooperate is not the relevant sense of "could" for determining the options of a choice.
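To illustrate the structural point, here is a minimal sketch using conventional Prisoner's Dilemma payoffs (the particular numbers are textbook values I am assuming, not anything from the post):

```python
# Conventional Prisoner's Dilemma payoffs: each entry is (row chooser's payoff, column chooser's payoff).
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

# From a single chooser's decision node, the other's choice is a fixed input, not an option:
def best_reply(other_choice: str) -> str:
    return max(["cooperate", "defect"],
               key=lambda mine: payoffs[(mine, other_choice)][0])

print(best_reply("defect"))     # 'defect': if the other will defect regardless, so should you
print(best_reply("cooperate"))  # 'defect': defection dominates at this chooser's node
```

Each chooser optimizes only over their own options; there is no single node at which "mutual cooperation" can itself be chosen, which is the sense in which the pair "could" cooperate without that being the relevant "could".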
(A3) I would want options to be complete options, so that stepping on the brake is just part of an option. (Though to be a presently-available option, the action does not have to occur at the present moment. It just has to be choosable at the moment.)
(A4) I previously answered a structurally identical question here.
(A5) Ambiguous. Yes rationally. No objectively.
(A6) Again, this doesn't matter. I tend to be more interested in the rational ought, but nothing hangs on this. (You can re-word my post as necessary to translate it into a more objective idiom.)
(A7) Terminological. When someone asks "Ought I to φ?", and your moral theory implies that the correct answer is 'yes', then we can say that your theory 'advises' φ-ing in those circumstances. You can use different words if you like.
Hi Richard,
Most of that is helpful. Thanks. A few more questions, though:
(Q8) Is the following your idea: S, at t1, has present deliberative control over whether or not he does X at t2 iff both of the following are true: (1) if S intends at t1 to do X at t2, this will result in S's doing X at t2 and (2) if S intends to refrain from doing X at t2, this will result in S's refraining from doing X at t2. If not this, could you define the notion of ‘present deliberative control’ for me.
(Q9) In the case where PP has asymmetric control, what is it that he has control over? It sounds like you’re saying that he has control over something that’s inevitable: viz., his refraining from writing the review, which is something he will do no matter what he does or doesn’t intend to do with regard to writing the review.
(Q10) I didn’t find your response to Q4 helpful. In what way is the question ambiguous? What are the two possible questions that I might be asking? In the link, you seem to suggest that whether you ought to do something is relativized to the option-set? In that case, which option-set is relevant to whether or not I should step on the brake? I’m also having trouble because you say that by ‘options’ you mean complete options, but then what you say in the link doesn’t make much sense, because what you refer to as options there are not complete options. In any case, I want to know whether I objectively ought to step on the brake. I don’t see how that is ambiguous. And if you tell me that, relative to one option-set, I should and, relative to another option-set, I shouldn’t, I want to know whether I objectively should act as I should relative to the one option-set or as I should relative to the other. Can you give me any ordinary language cases where ‘should’ or ‘ought’ is ambiguous in the way that you’re suggesting? In any case, which is the one that a fully-informed and perfectly conscientious and rational agent would act in accordance with? After all, such an agent can’t act one way relative to one option-set and another way relative to the other option-set.
-Doug
(A8) I just had the intuitive notion in mind here, but your analysis would probably suffice for many purposes. I could say more if I had a better idea of where you're going with this: do you have a particular problem in mind?
(A9) Right, but such overdetermination is familiar from Frankfurt cases.
(A10) Here are two possible questions:
(i) "Between the options of braking as I actually would [namely, turning right], and not braking, which do I have better reason to do?" Answer: obviously, not braking.
(ii) "Between all of my options at this time, is the one I ought to do one that involves braking in part?" Answer: obviously, yes.
I don't see any further question here. You ask, "But should I brake, simpliciter?" And I answer: you've only specified part of the option, so you've failed to ask a complete question. Yes, you should brake left-wise. No, you should definitely not brake right-wise. That's all there is to say. There's no substantive further issue here.
This is perfectly ordinary English.
Q. "Should I step on the brake?"
A. "It depends how you do it!"
A conscientious agent will consider all the complete options available to them at this time, and so will brake left-wise. There's no puzzle here.
Regarding A8, I just want to know what you have in mind.
Regarding A9, what is it that he has control over? Is it your view that he has control over whether he will refrain from writing the review but no control over whether he writes the review?
Regarding A10, do you deny that the decision to step on the brake pedal is a separate decision from the decision to turn the steering wheel to the left? I don't see how braking left-wise is a way of braking. I can see that braking hard or braking soft are two ways of braking, but I don't see how stepping on the brake pedal while turning the steering wheel to the left is a way of stepping on the brake pedal.
Maybe this case will pump your intuitions. Suppose that Smith will start running depending on whether I raise my right arm at t2 and that Jones will start running depending on whether I raise my left arm at t2. Isn't the decision to raise my right arm at t2 a separate decision from my decision to raise my left arm at t2? They seem like separate decisions to me. And if I have to decide whether to raise my right arm at t2, then I'll need to ask whether I should raise my right arm at t2. I can ask whether I should raise my right arm at t2 if I'm also going to raise my left arm at t2, but what if I'm not going to raise my left arm at t2?
What happens if it turns out that the mind consists of various streams of consciousness, and that sometimes a single human body performs two simultaneous actions as a result of decisions made by different streams of consciousness? Think of those cases where the corpus callosum has been severed. Should those be treated like interpersonal cases?
Doug - For my purposes the crucial point is that the decisions aren't normatively "separate" or independent of each other. They're best decided together, for whether you should brake or not depends on whether you will turn left or right.
We might say, "Given that you'll turn right, you ought not to brake." This is just to say, what even the possibilist agrees with, that between the options of braking and turning right, or not braking, the agent has better reason to not brake. It's not a very interesting claim, because the agent has no reason to be interested in that limited option set. Put another way: there's no reason to take his turning right as 'given', since it is within his present control.
To deliberate about whether or not to brake is thus the wrong question for the agent to ask himself. (This is a point I made in my very first comment in the old thread.) He ought to be deliberating about what to do between all the presently available options.
You keep bringing up forms of the 'selection' question, when (as previously explained) the only real issue of interest in the Actualism-Possibilism dispute is the evaluation question. (If you disagree with this meta-assessment, I hope you will respond to my linked comment.)
You write, "there's no reason to take his turning right as 'given', since it is within his present control." And I would say something similar about PP. There's no reason to take his not writing the review as 'given', since it is within his control. Sure, it's not under what you call his "present deliberative control." That is, there's no intention that he can form now that will ensure that he'll write the review in the future. But there are intentions that he could form in the future that would ensure that he'll write the review. With regard to whether PP should accept the invitation, you believe that there is no difference between the original PP case and a variation on that case in which PP has no control (at any point in time) over whether he'll write the review. Is that right? For you, the fact that he could accept the invitation and write the review in the original case but not in this variation on the original is irrelevant with regard to whether he should accept the invitation.
Now, at this point, I think that you'll want to say that S can't have objective reasons to form the intention to φ (e.g., the intention to both accept the invitation now and write the review later) unless that intention will ensure that S will φ -- after all, you say that the decision has to be "effective." Is that right? So if my intending to exercise this afternoon isn't going to ensure that I will exercise this afternoon, then I have no reason to intend to exercise this afternoon even if it would be very good were I to do so? Is that right? So every case in which I intended to φ but then failed to φ is a case in which I had no reason to intend to φ in the first place, for my decisions in each of these cases were ineffective. Is that right?
And why is this? Why is it that whether I objectively ought to intend to φ depends not on whether my φ-ing would be better than any other available alternative, but on whether my φ-ing is both an alternative that is under my present deliberative control and the best among those options that are under my present deliberative control?
And let's not bring issues of knowledge, prediction, and subjective probabilities into it. Let's just talk about what agents objectively ought to intend to do.
I wrote a massive comment here but it’s too long for the blog. This should tell me something! In a nutshell:
I think practical reasoning may be best expressed as relating to “efficacious action”: what you believe you can do to generate success. This can be long-term or short-term: ‘basic’ actions (which are not basic) and intentions to ‘mere’ act-sequences (which can be more significant than that word suggests).
I think the psychological suspension of the ability to re-evaluate during a ‘basic’ action (all of which do, in fact, take ample time for reconsideration) and the self-deceiving belief that an intention binds us at some later point in time are similar. Sartre might call them examples of Bad Faith.
A policeman decides to shoot a criminal. This requires many interim physical actions between the decision and the point of no moral return (pulling the trigger). But we can’t re-evaluate every tiny physical movement; we’d never get anything done. But as radically free beings we COULD. We suppress the awareness that we are free to choose differently whilst carrying out a basic action. In the policeman’s view, he decides, he shoots. He doesn’t realise it, but there is a decision gap. Act-sequences can be the same – we deceive ourselves that we aren’t free to go against our intentions at any time. If intentions (long-term or short) didn’t FEEL binding when we make them, we’d never make them.
Our morality needs to understand Bad Faith, which is essential for daily life (sorry Jean-Paul). If it doesn’t we could end up with conclusions like “making promises is morally wrong”, because promises involve tying a free agent into a situation where they feel bound to act a certain way.
So in my opinion we CAN have good reasons for entering into act-sequences and yet we CAN be morally judged for not thinking through those act-sequences with a suitably objective rigour (like PP should do). It’s definitely a tension between how free we are, how free we feel, and how free we’d like to be (Sartre says we are condemned to be free) and it is one of the central paradoxes of what it’s like to be (or at least, to believe ourselves to be) free agents.
Hi Richard,
Let me take another stab at the Tony case.
First, two assumptions:
(A1) Intending to φ is not an intentional action. That is, one does not intend to φ by intending to intend to φ.
(A2) A maximal conjunctive act performed at a time t is one that consists of all the acts that the agent could intend to perform simultaneously at t such that there is no additional act that the agent could also intend to perform at t. (This is, I believe, what you call an ‘option’.)
Now, the case:
Suppose (1) that it is now t1, (2) that Tony intends at t1 to hit Valerie at t2, (3) that if Tony intends at t1 to hit Valerie at t2, he will hit Valerie at t2, (4) that Tony does not intend at t1 to go to the gym at t5, (5) that Tony will not subsequent to t1 form the intention to go to the gym at t5 and, thus, will not go to the gym at t5, (6) that if Tony did intend at t1 to go to the gym at t5, he would go to the gym at t5, and (7) that Tony will kill William at t10 if and only if Tony neither hits Valerie at t2 nor goes to the gym at t5.
Given 4-7, it follows that unless the maximal conjunctive act that Tony intends at t1 to perform at t2 includes hitting Valerie, Tony will kill William at t10, for he’s not going to go to the gym given that he neither now has the intention to go nor later will form the intention to go.
Do you think that Tony is wrong in intending at t1 to hit Valerie at t2? Do you agree that the sequence of subsequent acts that Tony would perform if he were to intend at t1 to perform a maximal conjunctive act at t2 that includes his hitting Valerie is better than the sequence of subsequent acts that Tony would perform if he were to intend at t1 to perform a maximal conjunctive act at t2 that didn’t include his hitting Valerie? That is, do you agree that what would happen if Tony were to intend at t1 to perform a maximal conjunctive act at t2 that includes his hitting Valerie is better than what would happen if Tony were to intend at t1 to perform a maximal conjunctive act at t2 that didn’t include his hitting Valerie?
P.S. Have you read Holly S. Goldman’s “Doing the Best One Can”? It seems that a lot of what you say might be inspired by her article. Thus, I thought that it would be good to give you an example like the one above, which is inspired by one of her examples.
Hi Doug, no, I haven't come across Goldman -- thanks for the pointer. (I take myself to be offering the most natural consequentialist line, so it's not surprising that others would have previously offered similar suggestions.)
"Do you think that Tony is wrong in intending at t1 to hit Valerie at t2?"
Yes, straightforwardly: his choice (intention) at t1 should instead be to go to the gym at t5.
"do you agree that what would happen if Tony were to intend at t1 to perform a maximal conjunctive act at t2 that includes his hitting Valerie is better than what would happen if Tony were to intend at t1 to perform a maximal conjunctive act at t2 that didn’t include his hitting Valerie"
Yes, this is stipulated as part of the case. But again, it isn't relevant, because Tony should not just be deliberating about what to do at t2. He should be deliberating about all his options, and one of the options presently available to him in deliberation is the option to go to the gym at t5. He could now form this effective intention, and indeed that is precisely what he should do.
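Here is a minimal sketch of how I am reading the structure of the case (the outcome ranking in the labels is my own illustrative gloss on the discussion, not an added stipulation): comparing all of Tony's t1 options, only the effective intention to go to the gym at t5 avoids both the hitting and the killing.

```python
# Tony's t1 options, modelled as the two intentions he could effectively form at t1.
# The outcome labels and their ranking are illustrative assumptions, not extra stipulations.
def outcome(intend_hit_valerie: bool, intend_gym: bool) -> str:
    hits_valerie = intend_hit_valerie   # stipulated: a t1 intention to hit Valerie at t2 is effective
    goes_to_gym = intend_gym            # stipulated: a t1 intention to go to the gym at t5 would be
                                        # effective, but Tony will not form it at any later time
    if not hits_valerie and not goes_to_gym:
        return "kills William at t10 (worst)"
    if hits_valerie:
        return "hits Valerie at t2 (bad)"
    return "goes to the gym at t5 (best of the t1 options)"

for hit, gym in [(True, False), (False, True), (False, False)]:
    print(f"intend(hit Valerie)={hit}, intend(gym)={gym}: {outcome(hit, gym)}")
```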
"Why is it that whether I objectively ought to intend to φ depends not on whether my φ-ing would be better than any other available alternative, but on whether my φ-ing is both an alternative that is under my present deliberative control and the best among those options that are under my present deliberative control?"
There are a couple of ways to reach this conclusion.
Route 1: building down from options. As a consequentialist, I think the correct decision to make at any given decision node (or "choice point") is to choose the option that would be best. You can choose an option if you can form an effective intention to carry out the act in question. Otherwise, you can't. The correct intention to have is just the effective intention to carry out that option which is best (correct to choose). An ineffective intention cannot meet this correctness criterion.
Route 2: building up from subjective oughts. It's obviously irrational to maintain a manifestly ineffective intention. If you know you won't φ, you can't reasonably intend to φ. Objective oughts roughly correspond to what the agent should do given knowledge of all the relevant facts [modulo Shope's conditional fallacy worries]. So, agents objectively ought not to have intentions that will in fact prove ineffective.
For an analogy, consider the interpersonal case: "Why is it that whether we objectively ought to intend to φ depends not on whether our φ-ing would be better than any other available alternative, but on whether our φ-ing is both an alternative that is under my present deliberative control and the best among those options that are under my present deliberative control?"
The answer, obviously, is that for any given decision node, the choice to be made is between the options that are available here, to this momentary decision-maker. Whether others cooperate or not is not something that one has control over insofar as one is making this decision. Whether those "others" are numerically distinct people or just future stages of yourself, it makes no decision-theoretic difference to the choices you have before you now.
(And yes, the split brain cases should likewise be treated as interpersonal cases, insofar as they lack control over what the other half will do. They are independent deliberators, which is what matters.)
When specifying outcomes in a thought-experiment (or after the fact), don’t we need to avoid the pitfall of giving too much deterministic weight to those outcomes compared to how we should view them before the fact – undecided until the moment of choice?
“Tony will kill William at t10 if and only if Tony neither hits Valerie at t2 nor goes to the gym at t5” is a construction that makes perfect logical sense but misleads us as to the inevitability of Tony killing William at t10 when viewed from before that step.
Three options that I can see:
(Using φ as Tony killing William)
1. φ is not inevitable but a free choice made by Tony at that time. This simply undermines the validity of applying a moral judgement to φ-n retrospectively. Just because φ was what happened does not mean that nothing but φ could have happened. If this is true we cannot apply any culpability for φ to choices at φ-n, as φ does not necessarily follow from those steps.
2. φ is not a free choice but a physical necessity precipitating from previous steps. For example, in φ-n Tony sets a chain of physical processes in motion that cause the killing without further intervention. “Lighting the fuse and standing back.” If this is true then φ itself does not hold any moral significance; rather, Tony’s exercise of “present deliberative control” somewhere in φ-n should be considered culpable for the mere occurrence of φ.
3. φ is a “free choice” in name only and given φ-n could not have been made any other way. φ-n necessarily entailed φ deterministically. We deny any ability by Tony to affect the chain φ-n->φ. In this view I don’t think it makes sense to hold Tony (or any agent) morally culpable for anything.
When we talk about necessary outcomes in example cases it seems to me that consequentialists (who probably prefer option 1 above?) sometimes fall foul of a logical misstep – inferring, from the fact that φ-n did in fact lead to φ, that φ-n necessarily entailed φ. If there is a genuinely free agent choosing φ, don’t we have to give this backwards thinking up?
Hi Richard,
You make some good points. You've certainly given me a lot to think about. I'm not sure that I accept either of your Route 1 or Route 2, but I need to think about it some more.
While I'm thinking, could you tell me what, on your view, determines whether S ought to intend at t1 to φ at t2? I think that most actualists would accept T1. But you wouldn't, right? Perhaps you would accept T2. If not T2, then could you tell me precisely what view it is that you do accept? (I'm not trying to trap you with T2; I think that T2 is pretty plausible -- or, at least, it is if we accept some of your other views.)
(T1) Whether or not S ought to intend at t1 to φ at t2 depends on whether what would happen were S to φ at t2 is better than what would happen were S not to φ at t2.
(T2) Whether or not S ought to intend at t1 to φ at t2 depends on whether or not S’s intending at t1 to φ at t2 is a component of the best maximal set of intentions for S to have at t1. The best maximal set of intentions for S to have at t1 is the maximal set of intentions that would result in acts that would have better consequences than the acts that would result from any other maximal set of intentions that S could have at t1.
Doug,
I know that your question was addressed to Richard, but here’s my understanding of what he is saying in case this proves useful:
Of T2: I don’t think that this is acceptable, no. The point is that the objectively best maximal set of intentions to have at t1 does not necessarily include only effective intentions. For example, the best set of intentions for PP at t1 is {to intend to accept the invite at t1, to intend to write the review at t2}. But if he knows this intention at t2 is likely to be ineffective, he ought to intend at t1 to do something NOT in the best maximal set (that is, he ought to decline the invite).
That leaves T1. This also seems to miss the concept of effective intentions. It would be better to φ (where φ = write the review at t2) but PP OUGHT NOT to intend to φ at t1 (since he ought to know that the intention will be ineffective.)
I think that ineffective intentions are something of a poser for the consequentialist, not logically, but in using their moralities as practical guidelines for action.
1. At t1, we know that φ will happen at t2 iff φ necessarily obtains from t1.
2. An intention at t1 to φ at t2 does not entail that one will in fact φ at t2.
3. An intention at t1 to φ at t2 is effective iff one does φ at t2.
4. From 1, 2, 3: It is impossible to know if your future intentions are effective ones.
5. The correct moral choice at t1 is the choice that equals the first member of the maximal set of subsequent effective intentions.
6. From 4, 5: It is impossible to discern the correct moral choice at t1.
Richard, is consequentialism as you see it only a retrospective sort of morality?
Matt,
I meant for the best maximal set of intentions to have at t1 to include only effective intentions. That's the point of the following: "The best maximal set of intentions for S to have at t1 is the maximal set of intentions that would result in acts...[emphasis added]."
Doug, my mistake. Apologies for missing what you had specified there. Perhaps Richard would accept T2 then; I’ll defer on this one after all! But here’s why I misunderstood:
The tension for me in T2 is still the idea of this maximal set, the set that “would” result in the overall best outcome. Surely the set that includes the EFFECTIVE intention ‘to write the review at t2’ WOULD result in the maximal outcome. However, we have already said that this intention is ineffective and therefore this outcome cannot come true – leading to the moral judgement that PP ought not to accept the invite.
Now, if the case is based on the concept that this intention is necessarily ineffective for PP then the logic holds, but it is not true to life. Not all ineffective intentions are necessarily so. In fact I would imagine that hardly any are (only genuine intentions to do the impossible or those that are logically inconsistent, the intentions of madmen).
If the parameters of the case are that the intention is actually but only accidentally ineffective, then we admit the possibility that it could have been effective. This seems circular, as the case relies on the concept that the intention is ineffective.
This is what I have meant above when I say that we are either approaching the logic here after the fact, or that this kind of morality feels useless! If we can only judge the moral worth of an event a) in full possession of the facts and b) after the moral event has occurred, then I don’t see the value here. Straightforward consequentialism is perhaps ok, as it just says that we need full information for a fully accurate moral judgement. Practically difficult, maybe, but we can strive to achieve it. With a concept of ineffective intentions, though, do we also need prescience?
Thanks Doug. (T2) sounds right to me, given that we're talking about the set of intentions that are best in respect of the acts they effect, excluding those that are good merely for state-based reasons (cf. toxin puzzle).
Matt - we're talking about the 'objective' (God's eye view) sense of 'ought' here. This is simpler for various theoretical purposes. But the one that really matters for action-guidance is the 'subjective' (better, 'rational' or evidence-relative) sense of ought. People can often have good grounds for expecting their intentions to prove effective, so there's no barrier here to them making rational choices by consequentialist lights.
As for your earlier comment (and shifting to the 'rational' ought, which seems to be more your interest), it's just not true that "we cannot apply any culpability for φ to choices at φ-n, as φ does not necessarily follow from those steps." It is a bad idea to increase the risk of bad outcomes. A violent drunk shouldn't start drinking, even though this doesn't strictly entail that he'll later choose to act violently.
Richard,
That's a fair point. I've definitely got problems with the 'God's eye' view taken here though. I need to think through exactly what they are but I'm just not sure I can make sense of even applying an "ought" judgement to something when you have a (by definition) objective or completely knowledgeable viewpoint of it. I'm too much of a determinist at heart.
I think the existentialist sort of description I set out earlier makes the most sense to me (although this is neutral as to what sort of morality - e.g. consequentialism - to lay over the top) and eventually we come to the same conclusion on the rationality of expected effective intentions, combined with moral culpability, anyway.
Hi Richard,
I've reflected, and I don't see any way to deny your Route 2. It's very compelling. Thanks for setting me straight. I now have some revising to do. I'm not saying that I accept T2 and everything else that you've said. I think that there may still be some differences between us. But I've come a lot closer to a position like yours, which, by the way, isn't what I would call actualism. What I call actualism is T1. Your position seems closer to Goldman's (now Smith) version of possibilism than it does to actualism. And I've moved very close to something like that position, with modifications stemming from some insights that I've got from you. Of course, I've got a lot more thinking to do before I start revising. I want to get it right this time. But, again, thanks for your comments and the very helpful ensuing discussion.
Best, Doug
For my own part, I think there is definitely a problem for me with the ‘criterial’ sense of normativity that you set out in your linked post Richard. This is what leads me to say that the logic is backwards.
We are taking an objective or ‘God’s eye’ view of a situation, and we are specifying some known outcomes in order to judge an input normatively. So we can say “If agent does φ at t1 then X will happen at t2, else it will not.”
From this perspective of complete knowledge as so defined, how does it make any sense to query what ‘ought’ to happen at t1? Is not the only question we can ask here “what does happen at t1,” that is, “what does/will/did the agent do?” If we have complete knowledge then we don’t admit for any variance in the action at t1 and cannot ask a normative question. We already know what happens. ‘Ought’ is meaningless.
If we don’t have knowledge of t1, and would like to say that there is a genuine way that the agent could make one choice or another, then there is a genuine normative question to ask. But this isn’t a criterial normativity any more, it’s just another case of judging “according to the available evidence”.
Perhaps it is very good evidence in this case (knowledge of the future!) but I’m not sure if I can understand why this isn’t now just a special sort of epistemic, ‘rational’ question.
Richard, please feel free to delete or move this comment to the linked post if you feel it’s derailing the topic too much here. More on topic, I still need to do some thinking about the violent drunk scenario as there are questions here about reducing or influencing one’s own agency which seem to be at the heart of the problem for me. - Matt
Doug - you're most welcome! It's been a fun and fruitful discussion.
Labels don't intrinsically matter, but I still think your conception of 'actualism' fails to cut at the joints (or accurately reflect how self-identified "actualists" think of their view). As I've previously argued, we do better to follow Jackson & Pargetter in understanding Actualism as fundamentally a view about the evaluation of options, as reflected in the claim that Tony has better reason to perform iii than iv (in the linked example).
Matt - you're confusing your modalities. Just because someone does a certain act, doesn't mean that they couldn't have done otherwise. See my post 'A Future Without Fatalism'. (And note that we can make retrospective judgments of right- or wrongdoing in full knowledge of what actually happened. It isn't "meaningless" to say that Hitler ought not to have acted as he did.)
I suspect that your objection stems from confusing third-personal normative criticism with first-personal normative deliberation. If an agent already knows what she'll decide, then she can't continue to consider the deliberative question open. But that's a separate issue: it doesn't settle or preclude the critical question of whether she's deciding correctly or not.