First consider why one might initially think that the (fitting) consequentialist agent would use an expected value (EV) decision procedure. Presumably the thought is something like this: There's an isomorphism of sorts between moral facts and fitting mindsets (e.g., it's fitting to desire just what's good/desirable); the facts about what one ought (rationally) to do, according to consequentialism, are settled by the expected values of one's options; so it's rationally fitting to choose what to do by calculating the expected values of one's options.
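(For concreteness, "expected value" here can be given its standard textbook definition: an option's expected value is the probability-weighted average of the values of its possible outcomes,

$$\mathrm{EV}(A) = \sum_{o} P(o \mid A)\, V(o),$$

where the sum ranges over possible outcomes $o$, $P(o \mid A)$ is the probability of $o$ given that one performs act $A$, and $V(o)$ is the value of $o$.)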
I think that this rests on an overly simple view of the isomorphism between the moral facts and fitting mindsets. It's true that there's a straightforward isomorphism between the goods posited by a theory and what desires, or ultimate ends, are thereby shown to be fitting. But what about our capacities for "instrumental rationality", which take our ultimate desires as inputs, and -- guided by our available evidence -- yield concrete intentions or actions as outputs? Why think that our moral theories have any particular implications for these operations? On the contrary, I propose that we can give an independent (morally neutral) account of "instrumental rationality", with the upshot that fitting consequentialists aren't saddled with the EV decision procedure after all.
We may begin with the normal competence condition for instrumental rationality (in non-ideal agents): the dispositions that constitute our rational capacities are those that render us well-equipped to act in a wide variety of "normal" environments. This suggests the following feature list:
- Well-calibrated expectations (i.e. epistemic rationality)
- Well-allocated attentional resources (e.g. scanning for threats/opportunities)
- Well-calibrated predispositions (e.g. to avoid pain, be cooperative, help others in need, etc.)
- An executive faculty that is triggered when one faces novel or complex situations that one's predispositions are ill-equipped to handle (relative to one's ultimate ends)
The crucial observation underlying the above list is that "executive oversight" is an especially scarce resource in our cognitive economy, rendering conscious deliberation too slow to serve as our "default" mode of decision-making in normal circumstances. (There are also more principled philosophical obstacles, e.g. the regress problem inherent in "deliberating whether to deliberate", etc.) Instead, an instrumentally rational (normally competent) human-like agent must by default be guided by generally reliable sub-personal "predispositions" to act directly upon registering pertinent information, only triggering conscious deliberative oversight in those odd circumstances when one's sub-personal mechanisms aren't up to the task.
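To make the division of labor vivid, here's a minimal toy sketch in Python. It is purely illustrative, not a formalism from the post: the function names, the example situations, and the trigger condition are all made-up assumptions standing in for whatever the real sub-personal mechanisms do.

```python
# A toy sketch of the two-tier architecture described above: generally
# reliable "predispositions" act directly by default, and scarce conscious
# deliberation is triggered only for novel or complex situations.
# All names and cases here are illustrative assumptions.

def is_novel_or_complex(situation: str) -> bool:
    # Hypothetical trigger condition: escalate only for unfamiliar cases.
    familiar = {"someone in need", "routine hazard"}
    return situation not in familiar

def habitual_response(situation: str) -> str:
    # Default mode: act directly on a generally reliable predisposition.
    return {"someone in need": "help", "routine hazard": "avoid"}[situation]

def deliberate(situation: str) -> str:
    # Rare fallback: slow, explicit reasoning guided by one's ultimate ends.
    return f"weigh options for '{situation}' against ultimate ends"

def act(situation: str) -> str:
    if is_novel_or_complex(situation):
        return deliberate(situation)      # scarce executive oversight
    return habitual_response(situation)   # fast default

print(act("someone in need"))     # -> "help" (no deliberation needed)
print(act("novel trolley case"))  # -> escalated to explicit deliberation
```

The point of the structure is just that `deliberate` sits outside the default control flow: it is an exception handler for one's habits, not the main loop.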
On this view, the fitting (human, non-ideal) agent rarely acts upon explicit deliberation at all, let alone explicit EV calculations. This is so even when we plug in impartial consequentialist values as the "ultimate goals" at which this instrumentally rational agent aims. Furthermore, even when conscious deliberation is triggered, the evidence that we are unreliable at EV calculations counts against accepting their verdicts too hastily or uncritically (especially when those verdicts are at odds with more reliable rules of thumb, e.g. against torture, harming innocents, etc.).
Question: My above distinction between "morally neutral" instrumental rationality and "morally determined" ultimate ends seems well-suited to consequentialist theories. But does this approach seem appropriate for deontological theories also? Should fitting deontologists be understood as simply having certain constraints ("don't lie", etc.) among their ultimate ends? Or are deontological constraints better understood as mirrored in the "decision procedure" that converts the agent's aims into actions -- providing, in effect, an alternative to standard "instrumental rationality"?
I suspect that the ethically relevant decisions made by military commanders will be made using explicit consequentialist-type calculations, performed swiftly by dint of practice. However, other military virtues, such as obedience and duty, constrain how far this can be taken. One review I could find is here. Double-effect-type reasoning also seems popular.