Timothy Chappell, Open University
An option range is a set of alternative actions available to an agent at a given time. I ask how a moral theory's account of option ranges relates to its recommendations about deliberative procedure (DP) and criterion of rightness (CR).
I apply this question to Act Consequentialism (AC), which tells us, at any time, to perform the action with the best consequences in our option range then. If anyone can
employ this command as a DP, or assess (direct or indirect) compliance with it as a CR, someone must be able to tell which actions fit this description. Since the denseness of possibilia entails that any option range is indefinitely large, no one can do this. So no one can know that any option has ever emerged from any range as the best option in that range. However we come to know that a given option is right, we never come to know it in AC's way.
It is often observed that AC cannot give us a DP. AC cannot give us a CR either, unless we are omniscient. So Act Consequentialism is useless.
By [July 24] the condition of all the sailors was very bad... Dudley initiated
discussion by saying there would have to be something done; his companions would
know exactly what he meant... Dudley went on to argue that it was better to kill one
than for all to die. Both Brooks and Stephens replied, 'We shall see a ship tomorrow'.
Dudley persisted and said they would have to draw lots. Brooks [refused]... During
the night [while Brooks was steering,] Dudley held a conversation with Stephens...
Dudley said, 'What is to be done? I believe the [cabin] boy is dying. You have a wife
and five children, and I have a wife and three children. Human flesh has been eaten before.' Stephens replied, 'See what daylight brings forth.'
Sophie, with an inanity poised on her tongue and choked with fear, was about to attempt a reply when the doctor said, 'You may keep one of your children.'
'Bitte?' said Sophie.
'You may keep one of your children,' he repeated. 'The other one will have to go. Which one will you keep?'
'You mean, I have to choose?'
'You're a Polack, not a Yid. That gives you a privilege—a choice.'
Her thought processes dwindled, crumpled. Then she felt her legs crumple. 'I can't choose! I can't choose!' she began to scream... The doctor was aware of unwanted attention. 'Shut up!' he ordered. 'Hurry up now and choose. Choose, goddamn it, or I'll send both of them over there. Quick!'
When an agent performs a voluntary action, she chooses that action from an option range: from a set of usually incompatible alternative courses of action any one of which is or seems possible for her at the time of choice.
In my first epigraph, Dudley and his unfortunate shipmates share an option range including (1) drawing lots and killing the loser to eat him, (2) killing the weakest sailor, the cabin boy, to eat him, (3) waiting for the cabin boy to die and then eating him, and (4) eschewing cannibalism and waiting for rescue or death. In my second epigraph, Sophie's option range includes (1) sending her child Eva to the gas chamber and sparing her child Jan, (2) sparing Eva and not Jan, (3) saying “I will not choose” and losing them both, (4) saying “Spare them and send me”, and (5) spitting in the Nazi doctor's face and telling him to go to hell.
Clearly there are plenty of reasons why option ranges are philosophically interesting. One relation worth investigating is that between objective and subjective
option ranges—between the option range that the agent actually has, and the option range that the agent believes she has (or, differently, the option range that she considers). Another interesting relation is that between option ranges and moral theory. What option ranges should an agent consider? What, if anything, should our
criterion of rightness say about option ranges?
For obvious reasons the first relation, between objective and subjective option ranges, had better not be identity. First, any objective option range contains too many options to make deliberation over all of its contents practicable. Second, it looks like a mark of bad character for me to consider—or even notice—some of the options in
some of my objective option ranges. As Bernard Williams puts it:
One does not feel easy with the man who in the course of a discussion of how to deal with political or business rivals says, 'Of course, we could have them killed, but we should lay that aside right from the beginning.' It should never have come into his hands to be laid aside. It is characteristic of morality that it tends to overlook the possibility that some concerns are best embodied... in deliberative silence.
In ordinary English we have the locution not an option, meaning that some
objective option is not a serious option: it deserves “deliberative silence”. What
Williams calls “morality” (or “the morality system”) is a family of ways of ethical thinking that seem incapable of deliberative silence. The morality system says that, if any option is ruled out, it must be ruled out as the result of a deliberative calculation,
not in advance of any deliberative calculation (as Williams thinks should sometimes happen). If it is true that Dudley and his shipmates should not engage in murder and cannibalism, or that Sophie should reject absolutely any option that involves nominating either or both of her children for murder, these truths are, for the morality system, the termini and not the starting points of deliberation.
Of course the morality system's defenders might reject Williams's claim that it is bad to be incapable of deliberative silence. Even if they don't, they can respond with a distinction which makes room for Williams's points. They can agree that some concerns are best embodied in deliberative silence, but deny that any concerns deserve justificatory silence. Options are sometimes objectively available that agents of good character will not consider seriously in their deliberations, or will even fail to notice. That does not imply that these options are absent from the full justification of the good agent's actions; nor even absent from the full justification of the good agent's deliberations. The morality system's defenders can respond that while the
correct deliberative procedure (DP) is silent about many morally atrocious options, the correct criterion of rightness (CR) is never silent about any option.
This response to Williams seems right. However, it also suggests that the morality system appeals to comprehensiveness at the level of the CR to support selectiveness at the level of the DP. While I do not share Williams's hostility to all the various things that he labels as parts of the morality system, I do think that any version of this appeal must fail. To argue this, I here consider a theory of morality which makes the relation between its account of option ranges and its criterion of
rightness particularly intimate, and which makes the claim of comprehensiveness at the level of the CR particularly strongly. This theory is Act Consequentialism.
Act Consequentialism (AC) tells us, at any given time, to perform the best action in our option range. (Readers who object to this formulation of AC should consult sections III-IV.) It is well known that AC is unworkable, or at least cumbersome, as a DP: Bentham and J.S. Mill themselves make the point. It is less widely recognised that the same kind of problem that makes AC unworkable as a DP also makes AC unworkable as a CR.
The familiar problem for AC as a DP is this: Option ranges are so large that finitely intelligent agents cannot use deliberation about all the available options as a
feasible DP. The less familiar problem for AC as a CR is parallel: Option ranges are so large that finitely intelligent agents cannot know the results of comprehensive surveys of all the available options for bestness. Thus finitely intelligent agents can never know that a given action is “the action with the best consequences out of all
those available at the time”. AC‟s criterion of rightness is therefore impossible for
finitely intelligent agents like us to employ. I leave it to the Wittgensteinians to tell us whether a criterion that no one can employ is a useless criterion, or is not a criterion at all.
Before stating the main argument formally, I discard four irrelevances. 1. It doesn't matter whether, strictly, we regard AC as telling us to perform the action with the best consequences, or as telling us to perform any action with consequences at least as good as those of any other available option, or as telling us to perform any
action with consequences acceptably close in goodness to the best available consequence or set of consequences (= to “satisfice”). AC still needs to know which
option is best if it is to evaluate options by these criteria. So AC in these formulations still needs what AC cannot have: access to the results of comprehensive surveys of our options for bestness.
2. It doesn't matter whether AC admits supererogation. Forms of AC that admit supererogation presumably don't admit it in every case. Where AC does not admit supererogation, the action that AC says is right is still selected by its being the action with the best consequences. Where AC does admit supererogation, the action (or actions) that AC says we are free not to do are still selected by its (their) being the action with the best consequences (or the class of all actions with better consequences than the least good action it is permissible for us to do).
3. It doesn't matter whether AC interprets “wrong action” as “action deserving of blame or censure” or as “action such that it is optimific to blame or censure it”. The
latter account of what we should blame depends on a prior account of what is optimific; so it only postpones our question about option ranges.
4. It doesn't matter whether there is a clear distinction between actions and their consequences. If there is such a distinction (as I think), I am discussing AC's claim that we should perform the action with the best consequences. If there is no such distinction (as Jonathan Bennett thinks), I am discussing AC's claim that we should
take the best option.
Some critics have doubted that AC (in its basic form) does tell us to perform the action with the best consequences (or: the best option). I briefly present confirmation that (modulo irrelevances 1-4 in III) AC really does tell us to perform the action with the best consequences. Here are two diagnostic quotations, with italics added:
[A] consequentialist theory... tells us that we ought to do whatever has the best
consequences... [More formally,] the consequentialist holds that... the proper
way for an agent to respond to any values recognised is... in every choice to
select the option with prognoses that mean it is the best [i.e. the most probably successful/ highest scoring] gamble with those values.
Suppose I then begin to think ethically... I now have to take into account the
interests of all those affected by my decision. This requires me to weigh up all
those interests and adopt the course of action most likely to maximise the interests of those affected. Thus at least at some level in my moral
reasoning I must choose the course of action that has the best consequences,
on balance, for all affected.
(Peter Singer, Practical Ethics (Cambridge: Cambridge UP 1993), p.13)
Some critics, confronted with statements of AC like these, give me the puzzling advice “not to take them too literally”. I have no idea how else to take them.
My argument is this:
1. All option ranges are indefinitely large.
2. So finitely intelligent agents cannot know the results of comprehensive
surveys for bestness of all the available options.
3. So finitely intelligent agents cannot ever know that a given action is the action with the best consequences out of all those available at the time.
4. So finitely intelligent agents can never apply AC's CR.
5. So AC is an empty theory.
Premiss (1) I call the Denseness Claim, because its truth follows simply from something that seems undeniable, the denseness of possibilia: between every two
similar possibilities P and P*, there is a third possibility P** which (on some plausible similarity metric) is more similar to P than P* is, and more similar to P* than P is. So even when someone's range of options is severely restricted, there are still indefinitely many ways in which they can choose to act if they can choose to act at all. A paralysed man who can only move one finger, and that to a minimal extent, has (it might be claimed) exactly two options: to move it or not. However, actions are not reducible to physical movements. There are indefinitely many things that the paralysed man might be (in the middle of) doing by moving his finger, e.g. spending a month tapping out a Morse translation of The Iliad to an amanuensis.
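The Denseness Claim can be pictured on the model of a dense ordering like the rationals. A minimal sketch in Python (the modelling of possibilities as points on a line, and interpolation as the construction of the third possibility, are my illustrative assumptions, not part of the argument):

```python
from fractions import Fraction

def between(p, p_star):
    """Return a possibility strictly between p and p*: more similar to p
    than p* is, and more similar to p* than p is (on the distance metric)."""
    return (p + p_star) / 2

# Even a 'two-option' range (move the finger or not) admits interpolation:
p, p_star = Fraction(0), Fraction(1)
options = [p, p_star]
for _ in range(10):
    options.append(between(options[-2], options[-1]))

# Every interpolant is a new possibility, so no finite survey exhausts
# the range, which is what premiss (1) claims.
assert len(set(options)) == len(options)
```

The sketch models only the enumeration point; nothing in it settles how the consequences of these possibilities compare.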
Here the consequentialist might suggest, against Denseness, that even if it never happens in the actual world, still there are possible worlds where agents have
(say) two and only two options, which can therefore be ranked for bestness. The difference between such possible worlds and our own, it will then be claimed, is only a matter of degree.
How, without question-begging, can the consequentialist guarantee that there are any possible worlds where agents have (say) two and only two options? Not at any rate by proposing that there are possible worlds where agents sometimes have available to them two and only two physical movements. Since (as the Morse-Homer case shows) actions are irreducible to physical movements, a possible world in which only two movements are available to an agent at any time is not a possible world in
which only two actions are available to the agent at any time.
Denseness should not, by the way, be confused with a quite different non-consequentialist claim, Escapability. Escapability says that there is always a way
out of any moral dilemma, because it is never true that any agent's options are only Forbidden Option A or Forbidden Option B. Denseness could be true but Escapability false: Denseness allows that there could be an agent whose options are only Forbidden Option A (performed in indefinitely many ways) or Forbidden Option B (performed in indefinitely many ways): or indeed just Forbidden A and its variants. Non-consequentialism is not incoherent without Escapability: there is no logical bar
on saying that sometimes every available option is wrong, even if this invites the consequentialist rejoinder that, in that case, you might as well stop worrying about wrongness and think about bestness instead. Still, many non-consequentialists would like to establish Escapability. The way to do it is not by deploying Denseness, but rather (I expect) the doctrine of acts and omissions.
Suppose that we have established premiss (1). A second objection to my argument is that (1) does not entail (2): “the indefinite largeness of all option ranges does not stop finitely intelligent agents from knowing that a given action is the action with the best consequences out of all those available at the time: no more than the
denseness of a mathematical interval like a line stops us knowing where the end of the line is. So finitely intelligent agents can apply AC's CR after all.” Some critics, notably John Skorupski, have insisted that sometimes it is simply a matter of common sense to know what the best option is. To use Skorupski's own example: If this building has a bomb in it which will explode very soon, the best option is to leave immediately. It is nothing hard, Skorupski urges, to establish this. It is something that moderately intelligent agents will simply grasp at once. (Less than moderately intelligent agents, no doubt, will be less fortunate.)
I reply that we don't know that getting out of the building is the best thing to
do. What we know is that it is the right thing to do (given our present information).
Getting out of the building in Skorupski's bomb case is justified all right (again, given our present information). It is not justified in the way that AC outlines, by being
known (by anyone) to be literally the best thing to do—the action with the best
consequences out of all those available to us at the time. For reasons already given, that cannot be known. The indefinite variety of ways in which leaving the building immediately might not be the best thing to do in a bomb case is obvious (booby-
trapped exits, larger and more widely lethal bombs outside, etc.). The right action can sometimes be obvious to us; but it is never obviously right because it is obviously
best. To deal with the other analogy used in the objection: for any point on any line, it can be mathematically proved whether or not it is an end point. Nothing parallel holds regarding option ranges. For any option in any range, there is no proof whether or not
it is the best option in that range.
Still, the objection has identified a distinction between an immodest sense of “best” and a modest sense of “right”. Apparently we can reconstruct AC so that it is committed only to finding the right thing to do in this modest sense. Typical “sophisticated” (Railton's word) forms of AC that accept the CR/DP distinction often say in effect that the DP they recommend is to look for “the right thing to do” in just this modest sense, and not worry about looking for “the action with the best
consequences out of all those available to us at the time”. (Perhaps this is what was
meant by telling me not to take the statements given in IV too literally.)
However, what in AC could justify the adoption of a modest DP? Only three answers suggest themselves. Either (a) that DP is justified by “common sense”; or (b) the justification of that DP is itself modest: adopting it is a right but not necessarily the best thing to do; or (c) the justification of that DP is a full-fledged justification of AC‟s original form: its adoption is justified because its adoption brings about the best consequences.
To opt for (a) is to abandon AC in favour of something called common sense. Three cheers for that, as far as it goes; although some reservations about how far it does go will be added in a minute.
Option (b) looks unstable. How, for AC, could it be acceptable to adopt a DP on the grounds that it is right but not necessarily best? Answers to this question fall into two camps. Either they reduce to an appeal to common sense, bringing us back to (a); or they turn out to be satisficing versions of AC, which for my argument's purposes are not importantly different from stronger versions of AC. They are no less
committed than stronger versions of AC, at the level of the CR, to the accessibility of comprehensive surveys of all the options in any option range.
Option (c) entails an onus to show that a comprehensive survey of all possible actions of adopting DPs picks out as consequentially best that action of adopting that DP that is in fact adopted. As before, this task is impossible.
Another way of opting for (c) is to say that what the rational agent ought to do is look, not for the best option tout court, but for the best option discernible within the
time available for deliberation. As John O'Neill has put it in correspondence, “we search under time constraints and the flag may fall”. Here the difficulties about
finding the best option tout court reappear as difficulties about finding the best option discernible within the time available. (Notice also that to accept this time-constraint is not to decrease the number of options objectively available.) Indeed the difficulty now appears in two different dimensions: we have to hunt both for the best time-constraint-relative option, and for the best time-constraint for our option-hunting to be relative to.
Compare a proposal of Herbert Simon's:
There is no point in prescribing a particular substantively rational solution if
there exists no procedure for finding that solution with an acceptable
computing effort.... actual computation is infeasible for problems of any size
and complexity... Hence a theory of rationality for problems like the travelling
salesman problem is not a theory of best solutions—of substantive
rationality—but a theory of efficient computational procedures to find good solutions—a theory of procedural rationality.
Simon too is swapping a one-dimensional problem for a two-dimensional one. Instead of telling agents to look for the best option tout court, Simon tells them to look for the
best option available at an acceptable computing cost. But a very good option will be
worth a very high computing cost; and we have no general way of telling which options are available that are worth any particular computing cost, high or low. Like
the proposal to impose (or recognise) time-constraints on deliberation, Simon's proposal about assessing computation costs only multiplies the difficulties.
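Simon's travelling-salesman example can be made concrete. A hedged Python sketch (the city layout and the nearest-neighbour heuristic are my illustrative choices, not Simon's): substantive rationality surveys all tours for bestness, at factorial cost, while procedural rationality settles for a cheap pass that finds a good tour with no guarantee of the best one.

```python
import math
from itertools import permutations

cities = [(0, 0), (1, 5), (4, 1), (6, 4), (2, 2), (5, 0), (3, 6), (7, 2)]

def tour_length(tour):
    """Total length of a closed tour visiting each city once."""
    return sum(math.dist(tour[i], tour[(i + 1) % len(tour)])
               for i in range(len(tour)))

# Substantive rationality: a comprehensive survey of all tours for bestness.
# Feasible for 8 cities (8! = 40,320 tours); hopeless for 30 (30! > 1e32).
best = min(permutations(cities), key=tour_length)

# Procedural rationality: nearest-neighbour, cheap but not guaranteed best.
def greedy(start, rest):
    tour, rest = [start], list(rest)
    while rest:
        nxt = min(rest, key=lambda c: math.dist(tour[-1], c))
        rest.remove(nxt)
        tour.append(nxt)
    return tour

good = greedy(cities[0], cities[1:])

# The heuristic's tour is acceptable, not certified best.
assert tour_length(good) >= tour_length(best)
```

The text's point survives the sketch: knowing that the heuristic's answer is acceptable requires some standard other than a completed survey, since for realistic sizes the survey that defines the best tour can never be run.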
Quite generally, AC has no way of underwriting the rationality of commonsensical interpretations of “the best option”. Appeals to these interpretations are unjustifiable in AC‟s own terms. AC either uses such appeals, and then can give us no uniquely rational determinate practical verdicts; or it eschews such appeals, and then can give us no determinate practical verdicts at all.
My opponent might now move to talking about expected consequences in
developing his form of AC. Perhaps my characterisation of AC was wrong in a way not yet considered: all AC's CR tells us to do is perform the action with the best
expected consequences out of all those available to us at that time.
Here the relevant question is, of course, “Expected by whom?”. The question leads expectation-dependent AC to a fatal dilemma. Suppose, on the one hand, that AC says that the agent acts rightly iff he performs the action that he expects (or would
expect in a calm hour) to have the best consequences out of all those available at that
time. Then (a) even in a calm hour, how is an agent to run through every possible
action, to see which of them he expects to do best? Anyway, (b) the biconditional that AC offers is obviously false unless the agent is always right in his expectations. If an agent's expectations are even sometimes mistaken, that he should do what he expects
to have the best consequences cannot be a justifiable CR.
Suppose, on the other hand, that the agent is always right in his expectations.
Then (a) there is no chance of any actual, non-omniscient agent ever approximating this agent's deliberations, and (b) there is no point in even mentioning expectations. If the agent's expectations are idealised to the point of omniscience, we are back at the original formulation of AC in my first sentence, which referred not to best expected
consequences but simply to best consequences.
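The first horn can be illustrated with invented numbers (a toy sketch in Python; nothing in it comes from the text beyond the structure of the dilemma): an agent whose probability estimates are even slightly off will sometimes rank a worse option above a better one, so "the action he expects to be best" and "the action that is best" come apart, and the proposed biconditional fails.

```python
def expected_value(lottery):
    """Expected value of a lottery given as {outcome_value: probability}."""
    return sum(value * prob for value, prob in lottery.items())

# True prospects of two options (illustrative numbers).
true_prospects = {"A": {10: 0.5, 0: 0.5},   # true expected value 5.0
                  "B": {4: 1.0}}            # true expected value 4.0

# The agent's slightly mistaken estimates of the same prospects.
believed = {"A": {10: 0.3, 0: 0.7},         # believed expected value 3.0
            "B": {4: 1.0}}                  # believed expected value 4.0

chosen = max(believed, key=lambda o: expected_value(believed[o]))
actually_best = max(true_prospects,
                    key=lambda o: expected_value(true_prospects[o]))

# The agent does what he expects to be best, yet not what is best:
assert (chosen, actually_best) == ("B", "A")
```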
Does the problem only arise because I have not allowed AC any specification of how options are to be individuated? AC often wants to begin deliberation or justification with: “X's options here are A and B”. Against this I invoke Denseness, and object that if X's options can be described as A and B, then they can also be described as “A done F-ly, A not done F-ly, and B”; or as “A done F-ly, A not done
F-ly, B done F-ly, and B not done F-ly”; or as “A done by doing P, A done by doing
Q, and B”... and so on at indefinite length. Why can't AC resist my argument by insisting, e.g., that in the above case we consider only A and B, and that all the other qualifications of these two options are just so much irrelevant detail?
The answer is, of course, that it cannot be known in advance that the qualifications are irrelevant detail: “Little differences between acts of little significance can make a moral difference”. Sometimes such qualifications make no difference (so far as I know it does not matter whether I put this clause within brackets, or use a dash or colon before it instead). Sometimes they do. Among the indefinitely many ways of moving one's fingers, some of them count as parts of Morse renditions of Homer, and so as different options from those describable simply as “moving one's fingers”. Among the indefinitely many ways of saluting the flag, some of them are clearly parodic or sarcastic, and therefore constitute different options from acts of saluting it sincerely.
No finite account of the individuation of options is guaranteed to capture every difference that might conceivably be morally relevant, and ignore every difference that could not be morally relevant. AC has never offered such an account; and I see no hope that it will.
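The regress just described has a simple combinatorial shape: each further qualification F splits every option into "done F-ly" and "not done F-ly", doubling the count, and nothing fixes in advance how many qualifications are relevant. A toy sketch in Python (the particular qualifications are invented for illustration):

```python
from itertools import product

def refine(options, qualifications):
    """Cross each option with 'done Q'/'not done Q' for every qualification Q."""
    splits = [(f"done {q}", f"not done {q}") for q in qualifications]
    return [(opt, *combo) for opt in options for combo in product(*splits)]

base = ["A", "B"]
quals = ["F-ly", "by doing P", "sincerely"]

refined = refine(base, quals)

# k qualifications multiply a 2-option range into 2 * 2**k descriptions;
# as k grows without bound, so does the range AC's CR must survey.
assert len(refined) == len(base) * 2 ** len(quals)
```

The sketch shows only the growth; which refinements are morally relevant is precisely what, the text argues, no finite account settles in advance.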
No such account is offered, e.g., by Jackson and Pargetter, who think that “the
option approach” “can hardly be supposed [to be] totally misconceived”, in their
“Oughts, Options, and Actualism”, Philosophical Review 1986. They suggest that “If
you want the answer as to what an agent ought to do at or during some time, look at all the maximally relevantly specific actions possible at or during that time”: but they
say nothing about how to identify such actions.
Compare Hector-Neri Castañeda's “Ought, value, and utilitarianism”
(American Philosophical Quarterly 1969), which begs the present question in the
following sentence: “Let us call utilitarian actions the alternative actions in whose
comparisons of value utilitarianism is interested” (p.260, my underlining).
Nor is such an account discernible in Lars Bergstrom, “Utilitarianism and alternative actions”, Nous 1971 (see also Bergstrom's book, The Alternatives and Consequences of Actions, Stockholm 1966). Bergstrom defines a technical notion of an 'alternative set' and argues that there are various ways in which such sets could be fixed; but he admits that it is hard to avoid going normative in fixing them: for my purposes, a crucial admission. He confidently tells us that he doesn't think that these problems defeat his version of AC. Surprisingly, he doesn't say why he is so confident about this.
Dag Prawitz, in “The alternatives to an action”, Theoria 1970, attempts to
show how AC could avoid going normative about selecting option ranges. He thinks that the definition of 'alternative' should be: “b is an alternative to a iff a and b are different actions that are agent-identical, performable, and incompatible”. This is nice,
but does not help AC against the present argument.
Another popular proposal that does not help is that we should individuate options in the way that will bring about the best consequences. To identify this best
way of individuating options, we would have to be able to compare all possible ways of individuating options. (Nor, for reasons given in VII, can AC propose that we should individuate options in the way that we expect will bring about the best consequences.) Nor can AC appeal to past experience (as Mill tries to do in the quotation in Note 2), or to common sense. Past experience won't do it, because the proposal
would have to be “Individuate options in the way that worked best in the past”. First, there wasn't any one particular way in which “we” individuated options in the past, there were lots of ways. Second, the proposal assumes that we know which future experiences to subsume under which past generalisations, which we don't. (Even if
we had optimific rules saying, e.g., “If a case is E-type do X, if a case is F-type do Y”,
no such rule would cover a case that was both E-type and F-type. It is pointless to
reply that we could always make such a rule, because the argument iterates.) Third,
many of our past ways of individuating options were highly anti-consequentialist in their form and presuppositions. Fourth, we cannot without question-begging assume that we have any information about which past ways of individuating options worked best, i.e. optimally, in the past. Unless we already know what counts as best, we know only, at most, which of them worked well: which is no use to AC.
As for common sense: 'Common sense' is not the name of any determinate
body of opinion. Even at the crudest pre-theoretical level, therefore, common sense