Economics 352: Intermediate Microeconomics Notes and Sample Questions Chapter 18: Uncertainty and Risk Aversion

Expected Value

The chapter starts out by explaining what expected value is and how to calculate it, then shows why it isn't really a good way to analyze individuals' decisions. Expected value is basically the average payoff from some sort of lottery, gamble or other situation with a randomly determined outcome. For example, if the probability that you will have a bike accident in a year is 0.02 and the loss from that accident would be $1000, then your expected loss from a bike accident in any one year is:

E[loss] = 0.98*$0 + 0.02*$1000 = $20

For example, one of the bets offered on a craps table (a dice game that is correctly identified as the crack cocaine of the casino) is a field bet. This is a bet on one roll of the dice that results in the player winning $1 if the numbers 2, 3, 4, 9, 10 or 11 come up and winning $2 if 12 comes up, but losing $1 if any other number comes up. The expected payoff is:

E[payoff] = $1*(15/36) + $2*(1/36) - $1*(20/36) = -$1*(3/36) = -$0.0833

You should know that the probabilities are based on there being 36 possible outcomes from rolling two dice, of which 1 results in a 2 being rolled, 2 rolls result in a 3, 3 rolls result in a 4 and so on.

Now, a gamble with an expected payoff of 0 is a fair bet. The example in the book is flipping a coin and winning $1 if the coin comes up heads and losing $1 if the coin comes up tails. The expected payoff is zero. It seems reasonable that a person would be willing to accept a fair bet, but it isn't true. Imagine that you had the opportunity to bet your house on the flip of a coin. If the coin comes up heads you win an identical house or the cash value of your house. If the coin comes up tails, you lose your house. Despite the fact that this is a fair bet, most people wouldn't choose to take it.
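The field-bet arithmetic can be checked by enumerating all 36 dice outcomes. This is just a sketch of the calculation above; the payoff rule follows the description in these notes (actual casino field-bet payouts vary).

```python
from fractions import Fraction
from itertools import product

def field_payoff(total):
    # Payoff rule as described above: $2 on a 12, $1 on 2, 3, 4, 9, 10, 11,
    # lose $1 on anything else (5, 6, 7, 8).
    if total == 12:
        return 2
    if total in (2, 3, 4, 9, 10, 11):
        return 1
    return -1

# All 36 equally likely totals from rolling two dice.
outcomes = [a + b for a, b in product(range(1, 7), repeat=2)]
ev = sum(Fraction(field_payoff(t), 36) for t in outcomes)
print(ev, float(ev))  # -1/12, about -$0.083 per $1 bet
```

Using exact fractions makes it easy to see that the house edge is exactly $1*(3/36) = $1/12 per bet.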
The example given in the book is the St. Petersburg paradox. This is a game with an infinite expected payoff that most people wouldn't pay very much to play. The conclusion from these two examples (betting your house on a coin flip and the St. Petersburg paradox) is that there must be more to understanding decisions under uncertainty than expected value. The more important thing is expected utility, and it is expected utility that people want to maximize.

von Neumann-Morgenstern Utility

When a person faces some gamble, that gamble has an expected value, but it also has an expected utility. The expected utility of a gamble is the utility associated with an amount that, if received with certainty, would make a person just as well off as if she faced the gamble. That's sort of an awful explanation. Try this example. You have some amount of wealth. For the sake of argument let's say that it's $1,000,000. This wealth is fairly secure, but there is a 3% probability that some bad event will happen and you will lose all of your wealth. So you face a gamble that gives you expected wealth of:

E[wealth] = $1,000,000*0.97 + $0*0.03 = $970,000

Now, imagine that you could pay some amount that would ensure that you would not lose your wealth if the bad event happens. What is the maximum amount you would be willing to pay for that, to basically insure against the bad event? Let's imagine that you would be willing to pay, at most, $100,000 to eliminate the possibility of losing all of your wealth. The payment of $100,000 would leave you with a wealth of $900,000 for sure. Put somewhat differently, you would be indifferent between the gamble with an expected value of $970,000 and having $900,000 for sure. So, the expected utility from facing the gamble is equal to the utility from having $900,000 for sure. Put somewhat differently (and this isn't in the book), $900,000 is the certainty equivalent of the gamble.
It is the amount of money that, if received with certainty, gives you the same level of utility as your expected utility from the gamble. The standard assumption is that individuals maximize their expected utility when facing uncertainty.
Risk Aversion

The usual assumption about sane people is that they like to avoid risk and uncertainty, especially when the potential loss is large relative to their total wealth. In terms of real world behavior, the person who puts down a $5 bet on a sporting event (seemingly liking risk with small amounts of money) will buy insurance against loss of their house and against large health care bills (seemingly avoiding risks potentially involving large amounts of money).

One way of stating risk aversion is that if you face a gamble in which you have wealth level w1 with probability p1 and wealth level w2 with probability p2, you would prefer to have the expected level of wealth E[w] = w1*p1 + w2*p2 rather than face the risk. Put somewhat differently, the utility level associated with having the expected level of wealth for sure, U(E[w]), is greater than the expected utility level from facing the gamble, E[U(w)].

The standard diagram of this is below. Imagine that there are two levels of wealth that are equally likely. The curve shows the utility of wealth function for a risk averse individual. The levels of wealth associated with the gamble, w1 and w2, are equally likely, so the expected level of wealth is midway between w1 and w2. The utility associated with w1 is U(w1) and the utility associated with w2 is U(w2), and the expected utility from facing the gamble is midway between these two utility levels. Now, the utility level associated with having the expected level of wealth for sure, U(E[w]), is greater than the expected utility of wealth from the gamble, E[U(w)]. Again, with the gamble, you have wealth of w1 with probability 0.5 and wealth of w2 with probability 0.5, so the expected wealth, E[w], is midway between these. With the gamble, you have utility of U(w1) with probability 0.5 and utility of U(w2) with probability 0.5, so the expected utility, E[U(w)], is midway between these.
In addition, if you could have the expected level of wealth, E[w], for sure, you would be better off than facing the gamble. Your level of utility would be U(E[w]).
A graph showing a standard utility of wealth curve and a person facing two levels of wealth, one high and one low. The graph also shows the expected level of wealth, the utility associated with the expected level of wealth and the expected utility from facing the risk.

For example, if we have w1 = 10,000 and w2 = 90,000, each with probability 0.5, and U(W) = W^0.5, then:

E[W] = 0.5*10,000 + 0.5*90,000 = 50,000
U(w1) = 10,000^0.5 = 100
U(w2) = 90,000^0.5 = 300
E[U(W)] = 0.5*100 + 0.5*300 = 200
U(E[W]) = U(50,000) = 50,000^0.5 = 223.6

So, if this person faces the risk of losing 80,000 on a coin flip and having wealth of 90,000 with probability 0.5 and wealth of 10,000 with probability 0.5, then their expected utility level is 200. If, however, they can avoid this risk and have the expected level of wealth, 50,000, for sure, their utility level rises to 223.6. In addition, for the risk described above, we can describe the certainty equivalent of wealth as being that level of wealth that, if you received it with certainty, would make
you just as well off as if you faced the risk. In this case, we can calculate the certainty equivalent, CE, to be:

U(CE) = CE^0.5 = 200
CE = 200^2 = 40,000

So, this person would be indifferent between facing the risk (W=10,000 with probability 0.5 and W=90,000 with probability 0.5) and receiving 40,000 for sure, despite the fact that the expected wealth if facing the risk is 50,000. Put somewhat differently, the maximum amount that this person would be willing to pay for insurance that would allow them to avoid the risk would be 10,000, or the difference between the expected wealth (50,000) and the certainty equivalent of wealth (40,000). Another example of this is given in Example 18.2 of the textbook.

In terms of the above diagram this is: A graph showing the numbers from the above example.

The textbook also uses this diagram to show that, holding E[W] constant, as the difference between w1 and w2 increases, expected utility falls. Put somewhat differently, a person who would take a small risk on a coin flip would not necessarily take a large risk on a coin flip. You might be willing to bet $1 on a coin flip, but you would not bet $100,000 on a coin flip.
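The certainty equivalent arithmetic from the 10,000/90,000 example can be scripted. This is only a restatement of the numbers above; inverting U just means squaring, since U(W) = W^0.5.

```python
import math

p = 0.5
w1, w2 = 10_000, 90_000
U = math.sqrt  # U(W) = W^0.5, as in the example

exp_wealth = p * w1 + (1 - p) * w2         # E[W] = 50,000
exp_utility = p * U(w1) + (1 - p) * U(w2)  # E[U(W)] = 200
util_of_exp = U(exp_wealth)                # U(E[W]) = about 223.6

# Invert U to find the certainty equivalent: U(CE) = E[U(W)]
ce = exp_utility ** 2                      # CE = 200^2 = 40,000
risk_premium = exp_wealth - ce             # maximum insurance payment: 10,000
print(exp_wealth, exp_utility, round(util_of_exp, 1), ce, risk_premium)
```

The last line, exp_wealth minus ce, is exactly the "maximum willingness to pay for insurance" described above.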
The reason behind this is that risk aversion is associated with diminishing marginal utility of wealth. That is, the U(W) curve is positively sloped, so that more wealth makes you better off, but it flattens out as W increases, so that the additional utility associated with a marginal increase in wealth gets smaller and smaller as wealth increases. This means that the additional utility you get from a $1,000 increase in wealth is smaller than the loss of utility you would suffer from losing $1,000. Put more simply, in terms of utility of wealth, fair bets aren't worth it.

Measuring Risk Aversion

The presentation in the text gets fairly technical and even involves a Taylor expansion. I'll try to focus on the important stuff here. First, one measure of the degree to which a utility function reflects risk aversion is:

r(W) = -U''(W)/U'(W) > 0

If a person is risk neutral, their U(W) function will be linear and U''(W) = 0, so r(W) = 0. As the degree of risk aversion increases, the second derivative of utility gets bigger (in absolute terms), the utility function basically gets curvier, and r(W) gets larger. As r(W) increases, a person becomes more and more risk averse. A person who is more risk averse is willing to pay more for insurance to avoid a risk.

If utility of wealth is given by an exponential function:

U(W) = -e^(-AW), A > 0

then r(W) is independent of wealth, r(W) = A is constant, and we have constant risk aversion. A person with constant risk aversion would be willing to pay the same amount to avoid a risk of a fixed amount regardless of their base level of wealth. This is strange. Constant risk aversion suggests that a person would be willing to pay the same amount for insurance against a $100,000 loss regardless of whether their base wealth was $100,000 or $10,000,000. It makes more sense to expect that as a person's wealth increases, their willingness to pay for insurance against some risk of a fixed dollar amount would fall.
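One way to convince yourself that r(W) is constant for the exponential utility function is a quick finite-difference check. This is a numerical sketch, not a proof; the value A = 0.5 and the step size h are arbitrary choices.

```python
import math

A = 0.5  # illustrative risk-aversion parameter

def U(W):
    # Constant-absolute-risk-aversion utility, U(W) = -e^(-AW)
    return -math.exp(-A * W)

def r(W, h=1e-4):
    # Arrow-Pratt measure r(W) = -U''(W)/U'(W),
    # with derivatives estimated by central differences.
    U1 = (U(W + h) - U(W - h)) / (2 * h)
    U2 = (U(W + h) - 2 * U(W) + U(W - h)) / h**2
    return -U2 / U1

# r(W) comes out equal to A at every wealth level tested.
for W in (1.0, 5.0, 20.0):
    print(W, round(r(W), 4))
```

The same function r(W, h) can be pointed at any utility function to estimate its degree of absolute risk aversion numerically.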
A more reasonable measure of risk aversion is relative risk aversion:

rr(W) = W*r(W) = -W*U''(W)/U'(W)

Two types of utility functions display constant relative risk aversion. They are:

U(W) = W^A / A
U(W) = ln W

It is worth noting that ln W is the limiting case of the first of these as A approaches 0. Constant relative risk aversion is nice because willingness to pay for insurance depends on how large the risk faced is relative to base wealth. As wealth increases, a person becomes more willing to take a risk of a fixed dollar amount. For example, a person with preferences characterized by constant relative risk aversion and wealth of $10,000 might not make a $100 bet, but the same person with a wealth of $1,000,000 would make the bet. As the bet becomes a smaller percentage of wealth, the person is more willing to make the bet.

The State-Preference Approach to Choice Under Uncertainty

This section discusses an approach to dealing with uncertainty in a context that is something like the usual consumer utility model. For the purposes of this section, the two goods are wealth in the good state of the world (like when your house doesn't burn down) and wealth in the bad state of the world (like when your house does burn down). States of the world are situations in the future that you might face, each of which will happen with some probability. The section here discusses two states of the world (a hurricane hits your town or doesn't hit your town), but it is more realistic (and mathematically difficult) to talk about multiple states of the world. For example, instead of saying that a hurricane hits your town or doesn't hit your town, a more realistic but challenging model might discuss the severity of the conditions when a hurricane comes ashore near your town. In any case, we'll restrict the analysis to only two states of the world. So, the two states of the world are the good state and the bad state. The levels of wealth associated with these are Wg and Wb, and the probabilities of these states of the world occurring are π and 1-π.
There is a utility of wealth function, U(W), which is state independent. That is, a person has the same utility function regardless of which state of the world they find themselves in. Expected utility depends on a person's wealth in each state of the world, Wg and Wb, the associated levels of utility, U(Wg) and U(Wb), and the probabilities of each of these, π and 1-π:

V(Wg, Wb) = π*U(Wg) + (1-π)*U(Wb)

Now, a key to understanding this is that the wealth in each state of the world is something that a person purchases before they find out what state of the world they're in. So, before you find out whether or not your house burns down, you spend your initial wealth, W, to purchase wealth in particular states of the world. This sounds weird, but it's basically like making bets or buying insurance. For example, imagine that the two states of the world are that a coin comes up heads and that it comes up tails. You start out with $100, but you can't keep it. You must use that $100 to purchase income in the case that the coin comes up heads and income in the case that the coin comes up tails. These levels of wealth, which the book calls contingent commodities because they are contingent on which state of the world actually occurs, must be purchased at some prices, pg and pb. The budget constraint is:

W = pg*Wg + pb*Wb

For example, in the above coin-flipping example, if markets for contingent commodities are fair, you should be able to buy $1 of wealth in the heads state of the world for $0.50. That is, you should be able to pay $0.50 before the coin flip to get $1 if the coin comes up heads. In this case, the price of Wg (if the good state of the world is heads) would be 0.50, which is also π. Similarly, you should be able to buy $1 of wealth in the bad state of the world for $0.50, which is equal to 1-π. As another example, imagine that the good state of the world would occur with probability 0.8 and the bad state of the world with probability 0.2.
In this case, if markets are fair (meaning, basically, that customers would pay the expected value to buy the contingent good) then, before it is known which state of the world happens, a person should be able to buy $1 in the good state of the world for $0.80 and $1 in the bad state of the world for $0.20. So, the price of $1 in any state of the world should be equal to the probability of that state of the world occurring, if markets are fair. The price of $1 in the good state of the world is π and the price of $1 in the bad state of the world is 1-π.
The unavoidable analogy here is to betting on a horserace. The amount that you need to bet in order to get $1 back is very low if you're betting on a horse with a small probability of winning, but will be close to $1 for a horse that everyone thinks is very likely to win. There is a bit of an assumption here that markets for contingent goods ($1 if the good state of the world happens and $1 if the bad state of the world happens) are fair and that everyone knows and agrees upon the probabilities of each state of the world.

To put this all together, we have the utility function and budget constraint:

V(Wg, Wb) = π*U(Wg) + (1-π)*U(Wb)
W = pg*Wg + pb*Wb

If a person maximizes utility, then the ratio of the marginal utilities should be equal to the ratio of the prices:

(∂V/∂Wg)/(∂V/∂Wb) = [π*U'(Wg)] / [(1-π)*U'(Wb)] = pg/pb = π/(1-π)

A bit of multiplying yields:

U'(Wg)/U'(Wb) = 1, or U'(Wg) = U'(Wb)

And because the utility function is independent of the state of the world, we have:

Wg = Wb

That is, a utility maximizing person will make bets (also known as insurance contracts) so that their wealth in the good state of the world is equal to their wealth in the bad state of the world. This is a reflection of risk aversion in that, if insurance markets are fair, a person will choose to make contracts that eliminate the risks that they face.

Some Practice

Try the following questions from the book: 18.1, 18.3, 18.4, 18.5, 18.7
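The full-insurance result (Wg = Wb) can be checked by brute force. This is a sketch under assumed parameter values: U(W) = W^0.5, π = 0.8, initial wealth W = 100, and fair prices pg = π, pb = 1-π; none of these numbers come from the textbook.

```python
import math

pi = 0.8             # probability of the good state
W = 100.0            # initial wealth to be spent on contingent claims
pg, pb = pi, 1 - pi  # fair contingent-commodity prices equal the probabilities

def expected_utility(Wg):
    # Spending the whole budget, W = pg*Wg + pb*Wb, pins down Wb given Wg.
    Wb = (W - pg * Wg) / pb
    if Wb < 0:
        return float("-inf")  # not affordable
    return pi * math.sqrt(Wg) + (1 - pi) * math.sqrt(Wb)

# Search all affordable integer values of Wg; with fair prices the best
# choice equalizes wealth across states: Wg = Wb = 100.
best_Wg = max(range(1, 125), key=expected_utility)
best_Wb = (W - pg * best_Wg) / pb
print(best_Wg, best_Wb)  # 100 100.0
```

Changing pg away from π in this sketch breaks the full-insurance result, which matches the condition U'(Wg) = U'(Wb) holding only when pg/pb = π/(1-π).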
1. For a person with U(W) = W^(1/2) facing wealth of 90,000 with probability 0.9 and wealth of 0 with probability 0.1, calculate the following:
a. expected wealth
b. expected utility
c. utility of expected wealth
d. certainty equivalent of wealth
Show each of these on a standard U(W) diagram.

2. Calculate r(W) = -U''(W)/U'(W) for the following utility functions:
a. U(W) = W
b. U(W) = W^0.5
c. U(W) = ln W
d. U(W) = -W^(-1)

3. Calculate rr(W) = W*r(W) = -W*U''(W)/U'(W) for the following utility functions:
a. U(W) = W
b. U(W) = W^0.5
c. U(W) = ln W
d. U(W) = -W^(-1)
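If you want to check your answers to question 1, the same certainty-equivalent arithmetic used earlier in these notes can be scripted (a sketch; note that U(0) = 0 for this utility function):

```python
import math

p_good = 0.9
w_good, w_bad = 90_000, 0
U = math.sqrt  # U(W) = W^(1/2)

exp_wealth = p_good * w_good + (1 - p_good) * w_bad         # part (a)
exp_utility = p_good * U(w_good) + (1 - p_good) * U(w_bad)  # part (b)
util_of_exp = U(exp_wealth)                                 # part (c)
ce = exp_utility ** 2                                       # part (d): U(CE) = E[U(W)]
print(exp_wealth, exp_utility, round(util_of_exp, 1), ce)
```

As in the earlier example, the certainty equivalent in part (d) falls below the expected wealth in part (a), which is the mark of risk aversion.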