by Douglas Zare
4 November 2014
When I started to play backgammon seriously, my intuition about probabilities became much better, and I think this is a common experience for backgammon players. We not only have to compare risks against each other, as when deciding whether to break contact; we also make absolute estimates because of the doubling cube, and we get numerical feedback from the bots. Nevertheless, economists say we are not perfectly rational about risks. In this column, we'll look at a fundamental idea from economics about how economic agents approach risks. (We will continue the series of articles on roll variance in the bearoff later.)
The most naive approach to evaluating a random outcome is to try to maximize the average result. Suppose you have a chance to take a gamble that loses $10 40% of the time and wins $7 60% of the time. Is it a good deal? The expected value is -10 x 0.4 + 7 x 0.6 = +0.2. The average result is positive, a gain of 20 cents, so if we are trying to maximize the average result, we prefer this gamble to declining it (a certain 0).
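The expected-value arithmetic above can be checked in a couple of lines (a sketch; the outcomes and probabilities are the ones just described):

```python
# Expected value of the gamble from the text:
# lose $10 with probability 0.4, win $7 with probability 0.6.
outcomes = [(-10, 0.4), (7, 0.6)]
ev = sum(value * prob for value, prob in outcomes)
print(ev)  # about 0.2, up to floating-point rounding
```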
Would you risk your life savings for a 51% chance to double up? Your average result would increase if you take that gamble, but most people would not do that.
Another approach is expected utility maximization: trying to maximize your average level of satisfaction, or utility. This helps to explain risk aversion such as declining the 51:49 proposition above. Many people would feel that losing their life savings costs a lot more happiness than doubling their life savings would gain. We might say the utility of losing everything is 0, the current value is 100, and doubling our life savings is worth 110. If so, we would need to be 10:1 favorites to be indifferent to trying to double our life savings. A 51:49 gamble would have an expected utility of 0.51 x 110 + 0.49 x 0 = 56.1, far below the value of 100 from not gambling.
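A sketch of this comparison, using the made-up utilities from the paragraph (0 for ruin, 100 for the status quo, 110 for doubled savings):

```python
# Utilities from the text: ruin = 0, keep current savings = 100,
# double savings = 110.
u_lose, u_keep, u_win = 0.0, 100.0, 110.0

def expected_utility(p_win):
    """Expected utility of gambling the whole bankroll with win chance p_win."""
    return p_win * u_win + (1 - p_win) * u_lose

print(expected_utility(0.51))  # near 56.1, far below u_keep = 100
# Indifference point: p * 110 = 100, i.e. p = 10/11, a 10:1 favorite.
print(100 / 110)
```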
White redoubles tied 4-away 4-away.
Similarly, if White redoubles in this last roll position, it's right to pass. Taking would win more points on average in this game, but the points don't have equal utility. Passing would let you win the match about 33% of the time trailing 2-away 4-away, while taking would only let you win 11/36 ≈ 31% of the time.
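The take/pass comparison reduces to two match-winning chances, assuming the article's figures (11/36 for the take, and roughly 33% after passing to trail 2-away 4-away):

```python
from fractions import Fraction

# Last-roll redouble, tied 4-away 4-away (figures from the article).
p_win_take = Fraction(11, 36)  # taking: must win this game to win the match
p_win_pass = 0.33              # article's estimate after passing

print(float(p_win_take))               # about 0.306
print(p_win_pass > float(p_win_take))  # passing wins the match more often
```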
We don't have an objective measure of satisfaction the way we have match equity tables we trust, but we can make up a utility function that reports a number for each outcome. For example, we could say the utility of $x is log x, which results in the Kelly criterion for rational bankroll management. This might work for some people, but others might not like the tradeoffs recommended by the Kelly criterion. By observing which gambles people accept or decline, we can try to determine the properties the utility function must have, assuming that people are trying to maximize their expected utility.
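A minimal sketch of the Kelly idea (an illustration, not from the article; the win probability p and payout odds b are made up): with log utility, the fraction of bankroll to stake maximizes expected log wealth, and a grid search recovers the closed-form answer f* = p - q/b.

```python
import math

def expected_log_wealth(f, p, b):
    """Expected log wealth when staking fraction f on a bet
    that pays b:1 with win probability p."""
    return p * math.log(1 + f * b) + (1 - p) * math.log(1 - f)

p, b = 0.6, 1.0                  # hypothetical: 60% to win an even-money bet
kelly = p - (1 - p) / b          # closed-form Kelly fraction: f* = p - q/b
best = max((f / 1000 for f in range(0, 999)),
           key=lambda f: expected_log_wealth(f, p, b))
print(kelly, best)               # both near 0.2
```

Note that the Kelly fraction is far smaller than the "risk everything at 51%" proposal, which log utility rejects outright: the log of a zero bankroll is minus infinity.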
To a theorist, the idea of maximizing expected utility is very attractive. This is a simple, consistent approach. If you are trying to maximize expected utility, and you are deciding between options A, B, and C, then it doesn't matter if you decide between A and B first, then compare the winner with C, or compare B and C first, then pit the victor against A. Each option's expected utility evaluates to a single number, and the choice with the highest number is preferred regardless of the order of the comparisons.
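The order-independence can be demonstrated directly with three made-up gambles, each a list of (utility, probability) outcomes:

```python
# Each gamble reduces to one expected-utility number, so pairwise
# comparisons give the same winner in any order. A, B, C are hypothetical.
def eu(gamble):
    return sum(u * p for u, p in gamble)

A = [(100, 0.5), (0, 0.5)]   # EU = 50
B = [(60, 1.0)]              # EU = 60
C = [(110, 0.4), (20, 0.6)]  # EU = 56

winner1 = max(max(A, B, key=eu), C, key=eu)  # (A vs B), winner vs C
winner2 = max(A, max(B, C, key=eu), key=eu)  # A vs (B vs C)
print(winner1 == winner2)  # True: same winner either way
```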
Is maximizing expected utility what people do? Is it what people should do?
Article text Copyright © 1999-2018 Douglas Zare and GammonVillage Inc.