THE COINFLIPPER'S DILEMMA

by Steven E. Landsburg, University of Rochester

1. Alice's Dilemma.

Bob has challenged Alice to a coin-flipping contest. If she accepts, they'll each flip a fair coin repeatedly until it turns up tails, earning a score equal to the number of times the coin turns up heads. (Thus if Alice flips HHHT her score is 3.) The high scorer wins, and collects a prize of $4^n (or, if you prefer, the utility equivalent of $4^n) from the loser, where n is the loser's score. Thus if Alice flips a heads and Bob flips b heads, she'll receive a payment of

    P(a, b) = \begin{cases} -4^a & \text{if } a < b \\ 0 & \text{if } a = b \\ 4^b & \text{if } a > b \end{cases}    (1)

Alice has commissioned two economists to advise her on whether to accept the challenge.

Economist One observes that conditional on Bob's score b taking on the particular value b_0, Alice's expected return is

    E_a(P(a, b_0)) = \sum_{a=0}^{\infty} \frac{1}{2^{a+1}} P(a, b_0) = 1/2    (2)

(A priori, we might have expected this expression to depend on b_0, but it turns out not to.) Thus, no matter what score Bob earns, Alice's expected return is positive. Therefore she should play.

Economist Two observes that conditional on Alice's score a taking on a particular value a_0, her expected return is

    E_b(P(a_0, b)) = \sum_{b=0}^{\infty} \frac{1}{2^{b+1}} P(a_0, b) = -1/2    (3)

Thus, no matter what score Alice earns, her expected return is negative. Therefore she should not play.

Who's right?

[Footnote: Many thanks to Paulo Barelli, Hari Govindan and Asen Kochov for enlightening conversations.]
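Equations (2) and (3) can be checked directly: once one player's score is fixed, the other player's score is geometric with Prob(score = k) = 1/2^(k+1), and the payoff is constant on the tail where the fixed score is exceeded, so each conditional expectation is a finite exact computation. The following Python sketch is not from the paper; the function names (payoff, expected_given_bob, expected_given_alice) are my own, and it simply verifies both values with rational arithmetic.

```python
from fractions import Fraction

def payoff(a, b):
    """Alice's payoff P(a, b) from equation (1): she pays 4^a if her score a
    falls short of Bob's score b, collects 4^b if she beats it, 0 on a tie."""
    if a < b:
        return -(4 ** a)
    if a > b:
        return 4 ** b
    return 0

def expected_given_bob(b0):
    """E_a[P(a, b0)], equation (2): sum over Alice's scores a with weight 1/2^(a+1).
    For a > b0 the payoff is the constant 4^b0, so that tail sums in closed form."""
    head = sum(Fraction(payoff(a, b0), 2 ** (a + 1)) for a in range(b0 + 1))
    tail = Fraction(4 ** b0, 2 ** (b0 + 1))   # 4^b0 * sum_{a > b0} 1/2^(a+1)
    return head + tail

def expected_given_alice(a0):
    """E_b[P(a0, b)], equation (3): sum over Bob's scores b with weight 1/2^(b+1).
    For b > a0 Alice pays the constant 4^a0, so that tail also sums in closed form."""
    head = sum(Fraction(payoff(a0, b), 2 ** (b + 1)) for b in range(a0 + 1))
    tail = -Fraction(4 ** a0, 2 ** (a0 + 1))  # -4^a0 * sum_{b > a0} 1/2^(b+1)
    return head + tail

for s in range(8):
    assert expected_given_bob(s) == Fraction(1, 2)     # matches equation (2)
    assert expected_given_alice(s) == Fraction(-1, 2)  # matches equation (3)
print("Given any fixed opposing score 0..7: +1/2 conditional on b, -1/2 conditional on a")
```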

2. The Economists Make Their Cases.

Economist One elaborates thus:

Look. Suppose you had a perfectly clairvoyant friend who could predict Bob's score with certainty (but isn't allowed to reveal it to you). That friend, knowing Bob's score to be, say, 3 (or maybe 0 or 7 or 2), would use equation (2) to calculate your expected gain and would surely urge you to play. How can it make sense to ignore the advice of a benevolent friend who has better information than you?

Or if you prefer, look at it this way: Suppose Bob flips first. As soon as you learn his score, you know you're going to want to play this game, and you're going to be sorry if you failed to accept the challenge. Surely a policy you know you're going to regret is a bad policy. That proves that if Bob flips first, you should surely accept the challenge. But at the same time, it clearly doesn't matter who flips first, so you should accept the challenge in any event.

To put this yet another way, you can view Bob's score as a state of the world over which you have no control. From that point of view, playing is a dominant strategy: in every state of the world, it beats not-playing. Any good game theorist will tell you that when you've got a dominant strategy, you should surely use it.

This makes good sense to Alice. It's true she has no clairvoyant friends, but it seems equally true that clairvoyant friends give good advice, and that if she had a clairvoyant friend, this is the advice she'd get. Unfortunately, Economist Two counters thus:

Ah, yes, the imaginary clairvoyant friend trick. Let's run with that. Suppose your clairvoyant friend can predict your score with certainty. That friend, knowing your score to be, say, 3 (or maybe 0 or 7 or 2), would use equation (3) to calculate your expected gain and would surely urge you not to play.

How, indeed, can it make sense to ignore the advice of a benevolent friend who has better information than you?

Or if you prefer, look at it this way: Suppose you flip first (or any time at all before Bob's score is revealed). Then as soon as you learn your own score, you're going to wish you'd never agreed to play this game, and you know that in advance. Surely a policy you're sure to regret is a bad policy. That proves that if you flip first, you should surely reject the challenge. But as my esteemed colleague Economist One has already observed, it clearly doesn't matter who flips first. So you should reject the challenge in any event.

And as for that dominant strategy stuff, why don't we try viewing your score as the state of the world? In that case, not-playing beats playing in every state of the world, so not-playing is the dominant strategy. I agree with Economist One that if you've got a dominant strategy, you should use it. That's why I think you shouldn't play.

In case this doesn't leave Alice sufficiently confused, Economist Three has just arrived and makes this observation:

For goodness's sake, this is a zero-sum game, so if the game is good for you then it's bad for Bob. But at the same time, it's a perfectly symmetric game, so if the game is bad for Bob, then it's bad for you. In summary, if the game is good for you then it's bad for you, and by the same argument, if the game is bad for you then it's good for you. The only possible conclusion is that it doesn't matter whether you play or not. Pardon the expression, but you might as well flip a coin.

Alice believes that each economist has done an excellent job of explaining why his own argument is right. Unfortunately, none of them has even attempted to explain why the other arguments are wrong.

3. The Source of the Problem.

While Economist One has calculated Alice's expected return conditional on Bob's score, and Economist Two has calculated Alice's expected return conditional on Alice's score, it occurs to Alice that she might gain some insight by calculating her return unconditionally.

That is, Alice wants to calculate the value of

    \sum_{a,b} \frac{P(a, b)}{2^{a+b+2}}    (4)

where P is the payoff function defined in (1) and (a, b) runs over all possible pairs of scores (i.e. all possible pairs of non-negative integers). Unfortunately (4) does not converge. Worse yet, the sum of the positive terms diverges to +\infty while the sum of the negative terms diverges to -\infty, so it appears that (4) offers no guidance at all.

Indeed, if the sum (4) were absolutely convergent then the paradox could never have arisen in the first place, because then Fubini's theorem would allow us to interchange the order of summation and write

    \sum_{a=0}^{\infty} \frac{1}{2^{a+1}} \sum_{b=0}^{\infty} \frac{1}{2^{b+1}} P(a, b) = \sum_{b=0}^{\infty} \frac{1}{2^{b+1}} \sum_{a=0}^{\infty} \frac{1}{2^{a+1}} P(a, b)

which, in the presence of (2) and (3), simplifies to

    -1/2 = 1/2

Thus if (4) were absolutely convergent, then (2) and (3) could not simultaneously hold. This might seem to suggest a resolution, namely: Economists should not allow themselves to contemplate payoff functions that violate the hypotheses of Fubini's theorem. But where does this leave Alice, who knows nothing of Fubini's theorem but still has a decision to make?
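To see concretely why Fubini's theorem offers no help here, one can truncate the double sum (4) in different ways. The short Python sketch below is my own illustration, not part of the paper: over any square block of scores the positive and negative terms cancel exactly in symmetric pairs, yet each part on its own grows without bound, which is exactly the failure of absolute convergence; by contrast, summing one index all the way to infinity first gives +1/2 or -1/2 depending on which index goes first, as in (2) and (3).

```python
from fractions import Fraction

def payoff(a, b):
    """P(a, b) from equation (1)."""
    return -(4 ** a) if a < b else (4 ** b if a > b else 0)

def term(a, b):
    """One term of the double sum (4): P(a, b) / 2^(a+b+2)."""
    return Fraction(payoff(a, b), 2 ** (a + b + 2))

for N in (10, 20, 40):
    cells = [(a, b) for a in range(N + 1) for b in range(N + 1)]
    pos = sum(t for t in (term(a, b) for a, b in cells) if t > 0)
    neg = sum(t for t in (term(a, b) for a, b in cells) if t < 0)
    # The positive part alone grows roughly like N/4 (the negative part like -N/4),
    # so (4) is not absolutely convergent and Fubini's theorem does not apply.
    # The square truncation itself is exactly 0 by symmetry, even though summing
    # out a first (for each fixed b, as in (2)) gives +1/2 and summing out b first
    # (for each fixed a, as in (3)) gives -1/2.
    print(f"N={N}: positive part = {float(pos):.2f}, "
          f"negative part = {float(neg):.2f}, square total = {pos + neg}")
```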

4. Repeated Plays.

What can Alice expect if she plays this game repeatedly? We'll consider two scenarios.

Scenario One: Suppose first that Bob flips once, generating a score b that Alice repeatedly tries to beat by flipping coins to generate a new score a every day.

In this case, (2) tells us that Alice is playing a game with positive expected value, so the Law of Large Numbers is on her side: if she plays long enough she can be confident of coming out ahead. She might need to be pretty patient, though. Although her expected return is 1/2, the variance around that expected return is a whopping

    \sigma^2 = \frac{4}{7} \cdot 8^b - \frac{9}{28}

where b is Bob's score. Thus if Bob earns a score of, say, b = 4, Alice finds herself playing a game with expected value 1/2, a standard deviation over 48, and negative outcomes 30 times as likely as positive ones. It turns out that in order to have even a 50% chance of coming out ahead, she'll have to play at least 69 times, and this number increases extremely rapidly with b.

Scenario Two: Suppose instead that Bob and Alice each flip new scores independently each day. Because this is a symmetric zero-sum game, the distribution of Alice's returns must be symmetric around zero. Alice might therefore dare to hope for some version of the Law of Large Numbers, protecting her from large losses if she plays long enough. Alas, this hope is dashed by the main lemma in Section 3 of [F], from which we can extract the following: Let A_n be Alice's average return after n plays of the game. Then for small \epsilon, the expression Prob(|A_n| < \epsilon) does not approach 1 as n gets large.

In fact, with a bit more work, one can invoke the results of [L] and prove that things are even worse for Alice: For large M and large n, the expression Prob(|A_n| > M) is approximately equal to K/M, where K ≈ 2/3 is a constant that does not depend on n or M. In particular, there is no M for which Prob(|A_n| > M) tends to zero as n gets large. Thus Alice cannot use repeated plays to reduce the probability of, say, a $1000 average net loss. Indeed, playing twice as many games renders a $1000 average net loss just as likely but twice as painful.

[Footnote: Although the result above is stated for large n, computer simulations strongly suggest that the distribution of A_n looks nearly identical for all values of n from 5 to 5,000,000. See the appendix for some data.]
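The conditional mean and variance quoted for Scenario One can be checked exactly, because for a fixed Bob score b the payoff is constant once Alice's score exceeds b. The Python sketch below is my own check (the function names are hypothetical, not from the paper); it confirms the mean of 1/2, the variance formula \sigma^2 = (4/7) \cdot 8^b - 9/28, and the standard deviation of roughly 48 when b = 4.

```python
import math
from fractions import Fraction

def payoff(a, b):
    """P(a, b) from equation (1)."""
    return -(4 ** a) if a < b else (4 ** b if a > b else 0)

def conditional_moments(b):
    """Exact mean and variance of Alice's payoff given Bob's score b,
    where Alice's score a has Prob(a = k) = 1/2^(k+1)."""
    tail_prob = Fraction(1, 2 ** (b + 1))     # Prob(a > b); the payoff there is 4^b
    mean = sum(Fraction(payoff(a, b), 2 ** (a + 1)) for a in range(b + 1)) \
        + tail_prob * 4 ** b
    second = sum(Fraction(payoff(a, b) ** 2, 2 ** (a + 1)) for a in range(b + 1)) \
        + tail_prob * (4 ** b) ** 2
    return mean, second - mean ** 2

for b in range(7):
    mean, var = conditional_moments(b)
    assert mean == Fraction(1, 2)                              # expected return 1/2
    assert var == Fraction(4, 7) * 8 ** b - Fraction(9, 28)    # variance formula above
    print(f"b = {b}: standard deviation = {math.sqrt(var):.1f}")
# For b = 4 this prints a standard deviation of about 48.4.
```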

5. Resolution, Part I.

To resolve Alice's dilemma, we must first be explicit about what's at stake. Do the payoffs in (1) denote dollars, or do they denote units of utility?

In this section, we'll assume the payoffs are denominated in dollars. Thus the arguments of Economist One and Economist Two are valid only if Alice is an expected value (as opposed to expected utility) maximizer. But why should Alice be an expected value maximizer in the first place?

There can be two good reasons to maximize expected value. The first assumes repeated play and appeals to the Law of Large Numbers. But in this case, even if we assume repeated play, with Bob and Alice flipping independently each time, we've seen that the Law of Large Numbers not only fails, but fails in the strong sense that repeated play actually increases the probability of a given net total loss. So we can dispose of that reason.

The second good reason to maximize expected value is that one is really maximizing expected utility, and the amounts at stake are sufficiently small that changes in expected utility are well approximated by changes in expected value. This reason applies only if the amounts at stake are small, which they are arguably not, and only if Alice maximizes expected utility, which I will argue in the next section is not a viable assumption.

Thus we've eliminated both of the good reasons for Alice to maximize expected value and therefore rendered both economists' arguments invalid when the payoffs are monetary.

6. Resolution, Part II.

Suppose now that the payoffs in (1) denote units of utility. Then the economists' arguments rest on the assumption that Alice is an expected utility maximizer. But why should we believe such a thing?

The usual answer is that we envision an agent choosing among some set of lotteries, where a lottery is a probability distribution over some set C.

We assume the agent has some preference ordering, we make some assumptions about the properties of that preference ordering, and then we prove that the agent is an expected utility maximizer.

There are, of course, innumerable versions of such representation theorems, each with its own technical assumptions about the set C, about the set of allowable probability distributions on C over which the preference ordering is defined, and about the technical properties of the preference ordering. For example, the original von Neumann-Morgenstern expected utility theorem assumes that each allowable distribution has finite support. Unless one of those theorems applies, we have no reason to believe Alice is an expected utility maximizer and therefore no reason to be swayed by the arguments of Economists One and Two.

So in order to take these arguments seriously, we need a theorem, and before we can have a theorem, we need some hypotheses. For the Economists' arguments to work, these hypotheses would have to include at least the following assumptions:

- The set C includes the zero payoff (which Alice can earn by declining to play).
- The set C includes all possible payoffs P(a, b).
- For each fixed b_0, the probability distribution that assigns probability 1/2^{a+1} to the outcome (a, b_0) is an allowable lottery.
- For each fixed a_0, the probability distribution that assigns probability 1/2^{b+1} to the outcome (a_0, b) is an allowable lottery.

But there can be no representation theorem with these hypotheses, because if there were, it would imply the contradictory conclusions of Economists One and Two.

If we want a representation theorem, then, we have to prohibit Alice from having preferences over some of the lotteries we've considered. This seems a quite unsatisfactory solution, because all of those lotteries are easily implemented as long as a fair coin is available, and it's easy to imagine asking Alice to choose between any two of them.

That leaves the option of acknowledging that we have no representation theorem, hence no reason to believe that Alice is an expected utility maximizer, hence no reason to lend any credence to the arguments of either Economist.

What, then, should Alice do? Should she or should she not accept Bob's challenge?

The answer, of course, is that she should choose whatever she prefers! Presumably she can figure that out for herself. If she can't, no expected utility calculation can help, and we shouldn't expect it to.

Appendix

Let A_n be Alice's average payoff if she accepts Bob's challenge n times. The results of Section 4 say that for large n and large M, the probability that |A_n| > M is approximately constant. The question remains: How large is large? Computer simulations suggest that n = 5 is plenty large, in the sense that the distribution of A_5 appears indistinguishable from the distribution of A_{5,000,000}.

Figure 1 shows 100 data points for Alice's average (not total!) return over 5 simulated plays of the game (that is, the computer played five times, computed the average, plotted a point, and repeated this 100 times), then for 50, 500, 5000, and so on up to 5,000,000. Except for a few sporadic outliers, it's hard to discern much difference among these distributions. Figure 2 presents the same data on a different scale that makes it easier to discern the details at the cost of excluding the outliers.
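The appendix experiment is easy to reproduce in a few lines. The sketch below is my own reconstruction of the simulation described above, not the author's code: both scores are drawn fresh on every play, as in Scenario Two, and the spread of the average payoff A_n is reported for several values of n. The point to notice is that the spread does not shrink as n grows. (Pure Python is slow; reaching the larger n values shown in Figures 1 and 2 would require a faster implementation.)

```python
import random

def flip_score(rng):
    """Flip a fair coin until tails; return the number of heads."""
    score = 0
    while rng.random() < 0.5:
        score += 1
    return score

def payoff(a, b):
    """P(a, b) from equation (1)."""
    return -(4 ** a) if a < b else (4 ** b if a > b else 0)

def average_return(n, rng):
    """Alice's average payoff A_n over n independent plays (Scenario Two)."""
    return sum(payoff(flip_score(rng), flip_score(rng)) for _ in range(n)) / n

rng = random.Random(0)
for n in (5, 50, 500, 5000):
    trials = sorted(average_return(n, rng) for _ in range(100))
    # Rough 5th, 50th and 95th percentiles of the 100 simulated values of A_n.
    print(f"n = {n:>5}: {trials[4]:>10.2f} {trials[49]:>8.2f} {trials[94]:>10.2f}")
```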

[Figure 1: Average payoff from n plays; 100 trials of n plays reported for each n, for n = 5, 50, 500, 5000, 50,000, 500,000, 5,000,000.]

[Figure 2: The same data on a scale that shows more detail but excludes the outliers.]

References

[F] W. Feller, Note on the Law of Large Numbers, Annals of Mathematical Statistics 16, 1945.

[L] Lucia, Mean of i.i.d. Random Variables with No Expected Value, MathOverflow 59222, 2014.