Option Ranges


    Timothy Chappell, Open University


    An option range is a set of alternative actions available to an agent at a given time. I ask how a moral theory's account of option ranges relates to its recommendations about deliberative procedure (DP) and criterion of rightness (CR).

    I apply this question to Act Consequentialism (AC), which tells us, at any time, to perform the action with the best consequences in our option range then. If anyone can employ this command as a DP, or assess (direct or indirect) compliance with it as a CR, someone must be able to tell which actions fit this description. Since the denseness of possibilia entails that any option range is indefinitely large, no one can do this. So no one can know that any option has ever emerged from any range as the best option in that range. However we come to know that a given option is right, we never come to know it in AC's way.

    It is often observed that AC cannot give us a DP. AC cannot give us a CR either, unless we are omniscient. So Act Consequentialism is useless.


Option Ranges

    By [July 24] the condition of all the sailors was very bad... Dudley initiated discussion by saying there would have to be something done; his companions would know exactly what he meant... Dudley went on to argue that it was better to kill one than for all to die. Both Brooks and Stephens replied, 'We shall see a ship tomorrow'. Dudley persisted and said they would have to draw lots. Brooks [refused]... During the night [while Brooks was steering,] Dudley held a conversation with Stephens... Dudley said, 'What is to be done? I believe the [cabin] boy is dying. You have a wife and five children, and I have a wife and three children. Human flesh has been eaten before.'[1] Stephens replied, 'See what daylight brings forth.'

    Sophie, with an inanity poised on her tongue and choked with fear, was about to attempt a reply when the doctor said, 'You may keep one of your children.'

    'Bitte?' said Sophie.

    'You may keep one of your children,' he repeated. 'The other one will have to go. Which one will you keep?'

    'You mean, I have to choose?'

    'You're a Polack, not a Yid. That gives you a privilege—a choice.'

    Her thought processes dwindled, crumpled. Then she felt her legs crumple. 'I can't choose! I can't choose!' she began to scream... The doctor was aware of unwanted attention. 'Shut up!' he ordered. 'Hurry up now and choose. Choose, goddamn it, or I'll send both of them over there. Quick!'[2]


    When an agent performs a voluntary action, she chooses that action from an option range: from a set of usually incompatible alternative courses of action any one of which is or seems possible for her at the time of choice.

    In my first epigraph, Dudley and his unfortunate shipmates share an option range including (1) drawing lots and killing the loser to eat him, (2) killing the weakest sailor, the cabin boy, to eat him, (3) waiting for the cabin boy to die and then eating him, and (4) eschewing cannibalism and waiting for rescue or death. In my second epigraph, Sophie's option range includes (1) sending her child Eva to the gas chamber and sparing her child Jan, (2) sparing Eva and not Jan, (3) saying “I will not choose” and losing them both, (4) saying “Spare them and send me”, and (5) spitting in the Nazi doctor's face and telling him to go to hell.


    Clearly there are plenty of reasons why option ranges are philosophically interesting. One relation worth investigating is that between objective and subjective option ranges: between the option range that the agent actually has, and the option range that the agent believes she has (or, differently, the option range that she considers). Another interesting relation is that between option ranges and moral theory. What option ranges should an agent consider? What, if anything, should our criterion of rightness say about option ranges?

    For obvious reasons the first relation, between objective and subjective option ranges, had better not be identity. First, any objective option range contains too many options to make deliberation over all of its contents practicable. Second, it looks like a mark of bad character for me to consider, or even notice, some of the options in some of my objective option ranges. As Bernard Williams puts it:

    One does not feel easy with the man who in the course of a discussion of how to deal with political or business rivals says, 'Of course, we could have them killed, but we should lay that aside right from the beginning.' It should never have come into his hands to be laid aside. It is characteristic of morality that it tends to overlook the possibility that some concerns are best embodied... in deliberative silence.[3]

    In ordinary English we have the locution not an option, meaning that some objective option is not a serious option: it deserves “deliberative silence”. What Williams calls “morality” (or “the morality system”) is a family of ways of ethical thinking that seem incapable of deliberative silence. The morality system says that, if any option is ruled out, it must be ruled out as the result of a deliberative calculation, not in advance of any deliberative calculation (as Williams thinks should sometimes happen). If it is true that Dudley and his shipmates should not engage in murder and cannibalism, or that Sophie should reject absolutely any option that involves nominating either or both of her children for murder, these truths are, for the morality system, the termini and not the starting points of deliberation.

    Of course the morality system's defenders might reject Williams' claim that it is bad to be incapable of deliberative silence. Even if they don't, they can respond with a distinction which makes room for Williams' points. They can agree that some concerns are best embodied in deliberative silence, but deny that any concerns deserve justificatory silence. Options are sometimes objectively available that agents of good character will not consider seriously in their deliberations, or will even fail to notice. That does not imply that these options are absent from the full justification of the good agent's actions; nor even absent from the full justification of the good agent's deliberations. The morality system's defenders can respond that while the correct deliberative procedure (DP) is silent about many morally atrocious options, the correct criterion of rightness (CR) is never silent about any option.[4]

    This response to Williams seems right. However, it also suggests that the morality system appeals to comprehensiveness at the level of the CR to support selectiveness at the level of the DP. While I do not share Williams's hostility to all the various things that he labels as parts of the morality system, I do think that any version of this appeal must fail. To argue this, I here consider a theory of morality which makes the relation between its account of option ranges and its criterion of rightness particularly intimate, and which makes the claim of comprehensiveness at the level of the CR particularly strongly. This theory is Act Consequentialism.


    Act Consequentialism (AC) tells us, at any given time, to perform the best action in our option range. (Readers who object to this formulation of AC should consult sections III-IV.) It is well known that AC is unworkable, or at least cumbersome, as a DP: Bentham and J.S. Mill themselves make the point.[5] It is less widely recognised that the same kind of problem that makes AC unworkable as a DP also makes AC unworkable as a CR.[6]

    The familiar problem for AC as a DP is this: Option ranges are so large that finitely intelligent agents cannot use deliberation about all the available options as a feasible DP. The less familiar problem for AC as a CR is parallel: Option ranges are so large that finitely intelligent agents cannot know the results of comprehensive surveys of all the available options for bestness. Thus finitely intelligent agents can never know that a given action is “the action with the best consequences out of all those available at the time”. AC's criterion of rightness is therefore impossible for finitely intelligent agents like us to employ. I leave it to the Wittgensteinians to tell us whether a criterion that no one can employ is a useless criterion, or is not a criterion at all.


    Before stating the main argument formally, I discard four irrelevances.

    1. It doesn't matter whether, strictly, we regard AC as telling us to perform the action with the best consequences, or as telling us to perform any action with consequences at least as good as those of any other available option, or as telling us to perform any action with consequences acceptably close in goodness to the best available consequence or set of consequences (= to “satisfice”).[7] AC still needs to know which option is best if it is to evaluate options by these criteria. So AC in these formulations still needs what AC cannot have: access to the results of comprehensive surveys of our options for bestness.

    2. It doesn't matter whether AC admits supererogation. Forms of AC that admit supererogation presumably don't admit it in every case. Where AC does not admit supererogation, the action that AC says is right is still selected by its being the action with the best consequences. Where AC does admit supererogation, the action (or actions) that AC says we are free not to do are still selected by its (their) being the action with the best consequences (or the class of all actions with better consequences than the least good action it is permissible for us to do).

    3. It doesn't matter whether AC interprets “wrong action” as “action deserving of blame or censure” or as “action such that it is optimific to blame or censure it”. The latter account of what we should blame depends on a prior account of what is optimific; so it only postpones our question about option ranges.


    4. It doesn't matter whether there is a clear distinction between actions and their consequences. If there is such a distinction (as I think), I am discussing AC's claim that we should perform the action with the best consequences. If there is no such distinction (as Jonathan Bennett thinks),[8] I am discussing AC's claim that we should take the best option.


    Some critics have doubted that AC (in its basic form) does tell us to perform the action with the best consequences (or: the best option). I briefly present confirmation that (modulo irrelevances 1-4 in III) AC really does tell us to perform the action with the best consequences. Here are two diagnostic quotations, with italics added:

    [A] consequentialist theory... tells us that we ought to do whatever has the best consequences... [More formally,] the consequentialist holds that... the proper way for an agent to respond to any values recognised is... in every choice to select the option with prognoses that mean it is the best [i.e. the most probably successful/highest scoring] gamble with those values.[9]

    Suppose I then begin to think ethically... I now have to take into account the interests of all those affected by my decision. This requires me to weigh up all those interests and adopt the course of action most likely to maximise the interests of those affected.[10] Thus at least at some level in my moral reasoning I must choose the course of action that has the best consequences, on balance, for all affected.[11]

    (Peter Singer, Practical Ethics (Cambridge: Cambridge UP 1993), p. 13)

    Some critics, confronted with statements of AC like these, give me the puzzling advice “not to take them too literally”. I have no idea how else to take them.


    My argument is this:

    1. All option ranges are indefinitely large.

    2. So finitely intelligent agents cannot know the results of comprehensive surveys for bestness of all the available options.

    3. So finitely intelligent agents cannot ever know that a given action is the action with the best consequences out of all those available at the time.

    4. So finitely intelligent agents can never apply AC's CR.

    5. So AC is an empty theory.


    Premiss (1) I call the Denseness Claim, because its truth follows simply from something that seems undeniable, the denseness of possibilia: between every two similar possibilities P and P*, there is a third possibility P** which (on some plausible similarity metric) is more similar to P than P* is, and more similar to P* than P is. So even when someone's range of options is severely restricted, there are still indefinitely many ways in which they can choose to act if they can choose to act at all. A paralysed man who can only move one finger, and that to a minimal extent, has (it might be claimed) exactly two options: to move it or not. However, actions are not reducible to physical movements. There are indefinitely many things that the paralysed man might be (in the middle of) doing by moving his finger, e.g. spending a month tapping out a Morse translation of The Iliad to an amanuensis.[12]
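    The Denseness Claim admits a schematic statement. (The dissimilarity measure d below is notation introduced here for illustration only; the argument in the text stays informal about what the "plausible similarity metric" is.)

```latex
% Denseness of possibilia: for any two similar possibilities P and P*,
% there is a third possibility P** that is strictly more similar to P
% than P* is, and strictly more similar to P* than P is
% (d measures dissimilarity, so smaller d = more similar).
\forall P \,\forall P^{*} \,\exists P^{**}:\quad
  d(P, P^{**}) < d(P, P^{*})
  \;\wedge\;
  d(P^{*}, P^{**}) < d(P^{*}, P)
```

    As with the rational numbers under the usual ordering, iterating this step shows that any option range containing even two distinct but similar options contains indefinitely many.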

    Here the consequentialist might suggest, against Denseness, that even if it never happens in the actual world, still there are possible worlds where agents have (say) two and only two options, which can therefore be ranked for bestness. The difference between such possible worlds and our own, it will then be claimed, is only a matter of degree.

    How, without question-begging, can the consequentialist guarantee that there are any possible worlds where agents have (say) two and only two options? Not at any rate by proposing that there are possible worlds where agents sometimes have available to them two and only two physical movements. Since (as the Morse-Homer case shows) actions are irreducible to physical movements, a possible world in which only two movements are available to an agent at any time is not a possible world in which only two actions are available to the agent at any time.

    Denseness should not, by the way, be confused with a quite different non-consequentialist claim, Escapability.[13] Escapability says that there is always a way