Chapter 6: Decision Making and Risk: Certainty Equivalents and Utility
Decision Making Using Certainty Equivalents:
The certainty equivalent (CE) is the payoff amount we would accept in lieu of undergoing the uncertain situation.
Shirley Smart would pay $25 to insure her 1983 Toyota against total theft loss. CE = – $25.
For $1,000, Willy B. Rich would sell his Far-Fetched Lottery rights. CE = $1,000. Lucky Chance would pay somebody $100 to fill her Far-Fetched Lottery contract. CE = – $100.
A situation’s risk premium (RP) is the difference between its expected payoff (EP) and certainty equivalent (CE):
Shirley Smart’s car is worth $1,000 and there is a 1% chance of its being stolen. Thus, going without insurance has:
EP = (– $1,000)(.01) + ($0)(.99) = – $10
RP = EP – CE = – $10 – (– $25) = $15
Playing the Far-Fetched Lottery has EP = $2,500. Thus, for Willy B. Rich,
RP = EP – CE = $2,500 – $1,000 = $1,500
For Lucky Chance, RP = EP – CE = $2,500 – (– $100) = $2,600
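These CE and RP calculations can be verified with a short script. The payoffs and probabilities come from the examples above, with the Far-Fetched Lottery taken as a 50–50 gamble between +$10,000 and -$5,000 (consistent with its stated EP of $2,500):

```python
def expected_payoff(outcomes):
    """Expected payoff from a list of (payoff, probability) pairs."""
    return sum(payoff * prob for payoff, prob in outcomes)

# Shirley Smart: $1,000 car, 1% chance of theft; her CE for going
# uninsured is -$25, so her risk premium is RP = EP - CE.
ep_car = expected_payoff([(-1000, 0.01), (0, 0.99)])   # -$10
rp_shirley = ep_car - (-25)                            # $15

# Far-Fetched Lottery: +$10,000 or -$5,000, each with probability .5.
ep_lottery = expected_payoff([(10000, 0.5), (-5000, 0.5)])  # $2,500
rp_willy = ep_lottery - 1000     # Willy's CE = +$1,000 -> RP = $1,500
rp_lucky = ep_lottery - (-100)   # Lucky's CE = -$100   -> RP = $2,600

print(ep_car, rp_shirley, ep_lottery, rp_willy, rp_lucky)
```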
Different people will have different CEs and RPs for the same circumstance.
They have different attitudes toward risk.
People with positive RPs are risk averters.
Lucky Chance has greater risk aversion than Willy B. Rich, as reflected by her greater RP. We cannot compare Shirley’s risk aversion to the others’ because circumstances differ.
Risk-averse persons have RPs that increase:
When the downside amounts become greater, or when the chance of downside increases. A risk seeker will have a negative RP. A risk-neutral person has a zero RP.
Maximizing Certainty Equivalent
A plausible axiom: Decision makers will prefer the act yielding greatest certainty equivalent.
A logical conclusion: The ideal decision criterion is to maximize certainty equivalent. Doing so guarantees taking the preferred action. But CEs are difficult to determine. One approach is to discount the EPs.
RP = EP – CE implies that CE = EP – RP.
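One way to apply CE = EP – RP: discount each act's expected payoff by its assessed risk premium and pick the act with the greatest result. The two acts and their numbers below are hypothetical, purely for illustration:

```python
# Hypothetical acts: (expected payoff, assessed risk premium).
acts = {
    "risky venture": (5000, 3000),
    "safe venture":  (3000, 500),
}

# Discount each EP by its RP to get a certainty equivalent.
certainty_equivalents = {name: ep - rp for name, (ep, rp) in acts.items()}
best_act = max(certainty_equivalents, key=certainty_equivalents.get)

# The risky venture has the higher EP ($5,000 vs. $3,000), but after
# discounting, the safe venture has the higher CE ($2,500 vs. $2,000).
print(certainty_equivalents, best_act)
```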
Ponderosa Records' president has the following risk premiums, found by extrapolating from three equivalencies (white boxes). Exact amounts are unknowable, but these values seem to fit his risk profile.
How Good is the Analysis?
This result differs from that of ordinary folding back (Bayes decision rule): it specifically reflects the underlying risk aversion. The result must be correct if the CEs are right.
The major weakness is the ad hoc manner of getting the RPs, and hence the CEs: many assumptions are made in extrapolating to get the table of RPs.
There is a cleaner way to achieve the same thing using utilities.
Consider a set of outcomes, O_1, O_2, ..., O_n. The following assumptions are made:
Preference ranking can be done.
Transitivity of preference: If A is preferred to B and B to C, then A must be preferred to C.
Continuity: Consider O_between. Take a gamble between two more extreme outcomes: winning yields O_best and losing yields O_worst. There is a win probability q making you indifferent between getting O_between for certain and gambling. Such a gamble is called a reference lottery.
e.g., +$1,000 v. Far-Fetched Lottery, you pick q.
For Willy B. Rich, q = .5. (His CE was +$1,000.)
For Lucky Chance, q = .9.
If the win probability were .99, would you risk +$1,000 to gamble? What is your q?
Substitutability: In a decision structure, you would willingly substitute for any outcome an equally preferred gamble.
One outcome on Lucky Chance's tree is +$1,000; she would accept substituting for it the Far-Fetched Lottery gamble with a .9 win probability.
Utility Assumptions and Values
Increasing preference: Raising q makes any reference lottery more preferred.
Anybody would prefer the revised Far-Fetched Lottery in which two coins are tossed and just one head will win the $10,000. (The win probability goes from .5 to .75.) You still might not like that gamble!
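The .75 figure can be checked by enumerating the four equally likely two-coin outcomes:

```python
from itertools import product

# Revised Far-Fetched Lottery: toss two fair coins; at least one head wins.
tosses = list(product("HT", repeat=2))       # HH, HT, TH, TT
win_prob = sum("H" in t for t in tosses) / len(tosses)
print(win_prob)  # 0.75
```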
Outcomes can be assigned utility values arbitrarily, so that the more preferred always gets the greater value:
u(O_best) = 100    u(O_worst) = 0    u(O_between) = 50
Willy has u(+$10,000) = 500 and u(+$1,000) = 250. These are his values only.
Lucky has different values: u(+$10,000) = 50, u(+$1,000) = 35.1.
Like temperature, where 0 and 100 mark different states on the Celsius and Fahrenheit scales, utility scales may differ. The freezing point of water is 0°C and the boiling point 100°C. In-between states have values in that range, and hotter days have greater temperature values than cooler ones.
So, too, with utility values. They will fall into the range defined by the extreme outcomes, O_worst and O_best.
More preferred outcomes will have greater utilities.
A reference lottery can be used to find the utility for an outcome O_between:
First, establish an indifference win probability q_between making O_between equally preferred to the gamble:
O_best with probability q_between and O_worst with probability 1 – q_between
Second, compute the lottery's expected utility: u(O_between) = u(O_best)(q_between) + u(O_worst)(1 – q_between)
The indifference q plays a role analogous to a thermometer reading: it measures attitude toward the outcome much as a thermometer measures temperature.
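As a sketch, here is the reference-lottery computation for Willy B. Rich, using his scale u(+$10,000) = 500 with an implied u(-$5,000) = 0 (implied because .5 × 500 + .5 × 0 reproduces his u(+$1,000) = 250) and his indifference probability q = .5:

```python
def reference_lottery_utility(q, u_best, u_worst):
    """Expected utility of a reference lottery won with probability q;
    this value is assigned as the utility of the in-between outcome."""
    return u_best * q + u_worst * (1 - q)

# Willy B. Rich: q = .5 for +$1,000 vs. the Far-Fetched Lottery.
u_1000 = reference_lottery_utility(0.5, u_best=500, u_worst=0)
print(u_1000)  # 250.0, matching his u(+$1,000)
```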
Utility values assigned to monetary outcomes constitute a utility function.
From a few points we may graph the utility function and apply it over a monetary range.
Those points may be obtained from an interview posing hypothetical gambles.
Using u(+$10,000) = 100 and u(-$5,000) = 0, Shirley Smart gave the following equivalencies:
A:  +$10,000 @ q_A  v. -$5,000  ?  +$1,000  if q_A  = .70
B:  +$10,000 @ q_B  v. +$1,000  ?  +$5,000  if q_B  = .75
C1: +$1,000  @ q_C1 v. -$5,000  ?  -$500    if q_C1 = .70
C2: +$1,000  @ q_C2 v. -$5,000  ?  -$2,000  if q_C2 = .30
Shirley's utilities for the equivalent amounts are equal to the respective expected utilities:
u(+$1,000) = u(+$10,000)(.70) + u(-$5,000)(1 - .70) = 100(.70) + 0(.30) = 70
u(+$5,000) = u(+$10,000)(.75) + u(+$1,000)(1 - .75) = 100(.75) + 70(.25) = 92.5
u(-$500) = u(+$1,000)(.70) + u(-$5,000)(1 - .70) = 70(.70) + 0(.30) = 49
u(-$2,000) = u(+$1,000)(.30) + u(-$5,000)(1 - .30) = 70(.30) + 0(.70) = 21
Altogether, Shirley gave 6 points, plotted on the following graph. The smoothed curve fitting through them defines her utility function.
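The four interview-derived utilities can be reproduced directly from the equivalencies, using the same anchors u(+$10,000) = 100 and u(-$5,000) = 0:

```python
# Shirley Smart's utility points, built from her indifference probabilities.
u = {10000: 100.0, -5000: 0.0}

u[1000]  = u[10000] * 0.70 + u[-5000] * 0.30   # equivalency A
u[5000]  = u[10000] * 0.75 + u[1000]  * 0.25   # equivalency B
u[-500]  = u[1000]  * 0.70 + u[-5000] * 0.30   # equivalency C1
u[-2000] = u[1000]  * 0.30 + u[-5000] * 0.70   # equivalency C2

# Six points in all; a smoothed curve through them is the utility function.
for money in sorted(u):
    print(money, round(u[money], 1))
```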
Using the Utility Function
Read the utility payoffs corresponding to the net monetary payoffs.
Apply the Bayes decision rule, with either:
A utility payoff table, computing the expected utility payoffs.
Or a decision tree, folding it back.
The certainty equivalent amount for any act or node may be
found from the expected utility by reading the curve in reverse.
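A minimal sketch of this fold-back-then-invert procedure, using Shirley's six points with linear interpolation standing in for her smoothed curve, and a hypothetical gamble that is not from the text:

```python
# (money, utility) points from Shirley's interview, sorted by money.
points = [(-5000, 0), (-2000, 21), (-500, 49),
          (1000, 70), (5000, 92.5), (10000, 100)]

def money_for_utility(target):
    """Read the curve in reverse: the money amount whose utility is
    `target` (linear interpolation between adjacent plotted points)."""
    for (m0, u0), (m1, u1) in zip(points, points[1:]):
        if u0 <= target <= u1:
            return m0 + (m1 - m0) * (target - u0) / (u1 - u0)
    raise ValueError("utility outside the curve's range")

# Hypothetical gamble: +$5,000 with probability .6, -$2,000 with .4.
expected_utility = 92.5 * 0.6 + 21 * 0.4    # 63.9
ce = money_for_utility(expected_utility)    # about +$564
ep = 5000 * 0.6 + -2000 * 0.4               # +$2,200
print(expected_utility, round(ce), ep)      # CE well below EP: risk aversion
```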
The following Ponderosa Records decision tree was folded back using utility payoffs.
The risk averter has decreasing marginal utility for money. He will buy casualty insurance, and losses weigh more heavily than gains of like size.
Risk seekers like some unfavorable gambles.
Risk neutrality values money at its face value.
Important Utility Ramifications
Hybrid shapes (like Shirley's) imply shifting attitudes as monetary amounts change.
Regardless of shape, maximizing expected utility also maximizes certainty equivalent. Therefore, applying the Bayes decision rule with utility payoffs discloses the preferred action.
Primary impediments to implementation:
Clumsiness of the interview process.
Multiple decision makers.
Attitudes change with circumstances and time.
Ratification of Bayes Decision Rule
Over narrow monetary ranges, utility curves resemble straight lines.
For a straight line, expected utility equals the utility of the expected monetary payoff.
Maximizing expected monetary payoff then also maximizes expected utility. Thus:
The Bayes decision rule discloses the preferred action as long as the outcomes are not extreme.
Managers can then delegate decision making without having to find utilities. Preferred actions will be found by the Bayes decision rule alone.
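The claim can be illustrated: with any increasing linear utility u(x) = a + bx, expected utility is a monotone transform of expected monetary payoff, so the two criteria always rank acts identically. The acts and the coefficients below are hypothetical:

```python
# Hypothetical acts: lists of (payoff, probability) pairs.
acts = {
    "A": [(400, 0.5), (100, 0.5)],    # EP = 250
    "B": [(1000, 0.3), (0, 0.7)],     # EP = 300
}

def expected_payoff(outcomes):
    return sum(x * p for x, p in outcomes)

def expected_utility(outcomes, a=5.0, b=0.02):
    # Linear utility u(x) = a + b*x with b > 0 (arbitrary coefficients).
    return sum((a + b * x) * p for x, p in outcomes)

by_money   = sorted(acts, key=lambda k: expected_payoff(acts[k]), reverse=True)
by_utility = sorted(acts, key=lambda k: expected_utility(acts[k]), reverse=True)
print(by_money, by_utility, by_money == by_utility)  # identical rankings
```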