MA200.2 Game Theory II, LSE


Answers to Problem Set

[1] In part (i), proceed as follows. Suppose that we are computing player 2's best response to player 1. Let p be the probability that player 1 plays U. Now if player 2 chooses L, her payoff is pb + (1 − p)f, while if she chooses R, her payoff is pd + (1 − p)h. When p = 0, this boils down to a comparison of f with h, while if p = 1, it boils down to a comparison of b with d.

(a) If f − h and b − d have the same sign, then player 2 will always want to choose L (if f > h and b > d) or will always want to choose R (if f < h and b < d), regardless of the value of p. So the best response is flat at the same value of q (1 or 0).

(b) If f > h and b < d, then player 2 will prefer L below some cutoff value of p and R above it, where the cutoff p* is given by

p*b + (1 − p*)f = p*d + (1 − p*)h.

So the best response looks like this: it is q = 1 for p < p*, q = 0 for p > p*, and q is any value between 0 and 1 when p = p*.

The other cases are basically identical to (a) and (b). Notice that I have neglected possible equalities (such as f = h), but you handle these in a very similar way.

(ii) You will need a − c and e − g to have the same sign, and f − h and b − d to have the same sign. This ensures that each player has a strictly dominant strategy.

(iii) Suppose, on the contrary, that (U, L) is a pure strategy Nash equilibrium. Then a ≥ e and b ≥ d.

Case 1. b = d. Then it must be that g > c, otherwise (U, R) would be another Nash equilibrium. But then f > h, otherwise (D, R) would be Nash. But this means that, to avoid (D, L) being Nash, a > e. So in this case, a > e and b = d. Now let q be the probability of the Column player playing L. For all q ∈ (0, 1) close enough to 1, qa + (1 − q)c > qe + (1 − q)g, so playing U is a strict best response for Row. And if so, Column is indifferent between L and R (since b = d), so all pairs (p, q) with p = 1 and q sufficiently close to 1 are also Nash, a contradiction to uniqueness.

Case 2. b > d. But then, because there are no strictly dominant strategies, h ≥ f. But then, to avoid (D, R) being Nash, it must be that c > g. And because there are no dominant strategies, a = e (remember that a ≥ e to start with, so this is the only alternative). Now apply

the last couple of lines in Case 1, starting from a = e (instead of b = d), to get a similar contradiction.

[2] The best way to do these is simply to plot the best responses of the two players on the same graph and then check the intersections. I do this for the first example, the Battle of the Sexes. In the figure below, p is the probability of the Row player playing U and q is the probability of the Column player playing L. Note that there are three equilibria, two in pure strategies and one in mixed strategies (with the usual Battle-of-Sexes payoffs of 2 and 1, the mixed equilibrium has p = 2/3 and q = 1/3).

[Figure: the two best-response correspondences plotted in the (q, p) unit square, intersecting at the two pure equilibria and at the mixed equilibrium.]

You can (and should) do the rest yourself.

[3] This is a classical model in political science. First take n = 2. Define the unique position x* such that half the population have ideal points to the left of x* and the other half have ideal points to the right of it. Because there is a continuum of citizens and ideal points have a density function, there is no ambiguity in this definition. Now consider all possible combinations of pure strategies, call them (x_1, x_2). Without loss of generality we can suppose that x_1 ≤ x_2.

Case 1. x_1 < x_2. Now there are two subpossibilities. First, each of the candidates is getting an equal number of votes; there is an electoral tie. But then neither player can be playing a best response to the other. For instance, if x_1 is increased slightly by ε (so that x_1 + ε is still less than x_2), then candidate 1 will not lose a single original voter, will gain some (why?), and so will win the election for sure rather than with probability 1/2. Second, one of the candidates, say 1, is a clear winner. But the other candidate can assure herself a 50-50 chance by simply selecting the same position x_1. Thus it is not possible for the two candidates to have distinct positions in Nash equilibrium.

Case 2. x_1 = x_2. Again two subpossibilities. One is that this common value is not equal to x*, the median defined above. But then either player can deviate slightly in the direction of x* and get more than half the votes (why?), guaranteeing victory. In contrast, with x_1 = x_2 each player's chances of victory are only 50-50.
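The deviation logic in Case 2 (a small step toward the median wins outright) can be illustrated numerically. The sketch below is not part of the original answer: it approximates the continuum of citizens by a large random sample whose median is 0, and the candidate positions are hypothetical.

```python
import numpy as np

def vote_shares(positions, voters):
    """Each voter votes for the nearest candidate; exact ties are split equally."""
    positions = np.asarray(positions, dtype=float)
    dists = np.abs(voters[:, None] - positions[None, :])
    is_min = dists == dists.min(axis=1, keepdims=True)    # nearest candidate(s) per voter
    weights = is_min / is_min.sum(axis=1, keepdims=True)  # split ties equally
    return weights.sum(axis=0) / len(voters)

# Citizens' ideal points drawn from a density whose median x* is 0
# (standard normal here; the distribution itself is an arbitrary choice).
rng = np.random.default_rng(0)
voters = rng.standard_normal(100_000)

# Both candidates at the same non-median position 0.5: an electoral tie.
tie = vote_shares([0.5, 0.5], voters)
# Candidate 2 deviates slightly toward the median and captures every voter
# below the midpoint 0.475, a strict majority under this distribution.
deviate = vote_shares([0.5, 0.45], voters)

assert abs(tie[0] - 0.5) < 1e-9
assert deviate[1] > 0.5
```

The same function also shows why the argument fails with n = 2 at the median itself: a deviator from (x*, x*) cedes a majority rather than gaining one.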

The only remaining possibility is that x_1 = x_2 = x*, where both candidates converge to the median voter. This is a Nash equilibrium, and it's the only one, as we have seen.

If n ≥ 3, there is no Nash equilibrium in pure strategies. It is very easy to see this using the methods above: essentially, one looks through the different cases and finds a profitable deviation each time. You may be wondering how the equivalent of median-voter convergence gets ruled out. Well, imagine that x_i = x* for every i, so each candidate's chance of winning is 1/n. Now have any candidate deviate by choosing a position slightly to the right of x*. Notice that she will now get everyone to the right of her, which is almost half the population. In contrast, the voters to her left are split up among the remaining candidates (some even stay with her), so that none of her rivals can get more than a quarter. So she wins for sure. [Go back again to Case 2 above and see why this argument does not work when n = 2.]

[4] (a) Look at the first-order conditions for best responses. The important point here is that you should not blindly write down first-order equality conditions describing a Nash equilibrium. Suppose you do that here: you will get the absurd result that

λ_i f'(∑_{j=1}^n e_j) = 1 for every i,

which, of course, cannot hold simultaneously for all the different values of λ_i! Once you see this, you will understand right away that in any equilibrium, e_i is positive only if λ_i is the largest among all the λ's. So in any pure strategy Nash equilibrium, the shareholders who have a share lower than the maximum share (call this maximum share M) put in zero effort, and the maximum shareholders put in any combination of efforts such that total effort E solves the equation M f'(E) = 1.

(b) Notice that none of the above attains the efficient outcome unless there is one person who holds the entire share! In that case, M = 1 and the last equation in part (a) guarantees that the outcome is efficient.
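The characterization in (a) can be verified for a concrete example. This is a quick numerical check, not part of the original answer: the production function f(E) = 2√E and the share vector below are illustrative assumptions, and the claimed equilibrium is tested against unilateral deviations on a fine grid.

```python
import numpy as np

def f(E):
    return 2.0 * np.sqrt(E)   # concave production: f'(E) = 1 / sqrt(E)

shares = np.array([0.5, 0.3, 0.2])   # the lambda_i (illustrative values)
M = shares.max()

# Claimed equilibrium: only the maximum shareholder works, with total
# effort E* solving M f'(E*) = 1, i.e. M / sqrt(E*) = 1, so E* = M^2.
E_star = M ** 2
efforts = np.where(shares == M, E_star, 0.0)

def payoff(i, e_i, e):
    e = e.copy()
    e[i] = e_i
    return shares[i] * f(e.sum()) - e_i

# No player gains from any unilateral deviation (grid search over efforts).
grid = np.linspace(0.0, 2.0, 2001)
for i in range(len(shares)):
    best_dev = max(payoff(i, d, efforts) for d in grid)
    assert best_dev <= payoff(i, efforts[i], efforts) + 1e-9
```

The grid search confirms that the low-share players strictly prefer zero effort: their marginal private return λ_i f'(E*) is below 1 at the equilibrium total effort.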
(c) Notice that high inequality is conducive to efficiency in this example. But efficiency here just means Pareto optimality: there is no other combination of efforts that makes every person better off. [At this stage, you should convince yourself that every Nash equilibrium of every situation in which at least two persons get a positive share is inefficient in this sense.] Pareto optimality isn't everything in life, however, because the outcome is highly inequitable. The result also depends on the assumption that output is a function of the sum of efforts. If output depends on efforts in other ways, then efficiency and inequality are no longer so closely related. If you are interested in more on this, see http://www.econ.nyu.edu/user/debraj/papers/bdr03.pdf

[5] Let e_m be the amount of love and care put in by Mum, and e_d the amount put in by Dad. Mum wants to maximize m(e_m + e_d), while Dad wants to maximize d(e_m + e_d).

By the same argument as in case (a) of the previous question, both parents cannot simultaneously set their first-order conditions equal to zero (because they have different maximizers). So one parent must put in zero love and care and the other the whole amount. Now show that if Mum has the larger maximizer, she must put in all the effort.

[6] [a] Consider the maximization problem:

max ∑_{i=1}^n [u(c_i) − v(e_i)] subject to ∑_{i=1}^n c_i ≤ f(∑_{i=1}^n e_i).

Of course you can use Lagrangeans to do this, but a simpler way is to first note that all the c_i's must be the same. For if not, transfer some consumption from a larger c_i to a smaller c_j: by the strict concavity of u, the maximand must go up. The argument that all the e_i's must be the same is just the same: again, proceed by contradiction and transfer some effort from a larger e_i to a smaller e_j. By the strict convexity of v, the maximand goes up. Note that in both cases the constraint is unaffected. So we have the problem:

max_e u(f(ne)/n) − v(e),

which (for an interior solution) leads to the necessary and sufficient first-order condition

u'(c*) f'(ne*) = v'(e*), where c* = f(ne*)/n.

[b] The (symmetric) equilibrium values ĉ and ê will satisfy the first-order condition

(1/n) u'(ĉ) f'(nê) = v'(ê).

It is easy to see that this leads to underproduction (and underconsumption) relative to the first best. For if (on the contrary) nê ≥ ne*, then ĉ ≥ c* also. But then, by the curvature of the relevant functions, both sets of first-order conditions cannot simultaneously hold.

[c] Each person chooses e to maximize

u([β(1/n) + (1 − β) e/(e + E′)] f(e + E′)) − v(e),

where E′ denotes the sum of the others' efforts. Let (c, e) denote the best response. Write down the first-order condition, which is necessary and sufficient for a best response:

u'(c) ([β(1/n) + (1 − β) e/(e + E′)] f'(e + E′) + f(e + E′) (1 − β) E′/(e + E′)²) = v'(e).

Now impose the symmetric equilibrium condition that (c, e) = (c̃, ẽ) and E′ = (n − 1)ẽ. Using this in the first-order condition above, we get

u'(c̃) [(1/n) f'(nẽ) + (1 − β)(n − 1) f(nẽ)/(n²ẽ)] = v'(ẽ).

Examine this for different values of β. In particular, at β = 1 we get the old equilibrium, which is no surprise. The interesting case is when β is at zero (all output divided according to work points). Then you should be able to check that

u'(c̃) f'(nẽ) < v'(ẽ)!

[Hint: to do this, use the strict concavity of f, in particular the inequality f(x) > x f'(x) for all x > 0.] But the above inequality means that you have overproduction relative to the first best. To prove this, simply run the underproduction proof in reverse and use the same sort of logic. You should also be able to calculate the β that gives you exactly the first-best solution. Notice that it depends only on the production function and not on the utility function.

[d] Think about it!

[7] (a) [Game tree: player 1 moves first, choosing A or B. A ends the game with payoffs (6, 0, 6). After B, player 2 chooses C or D; C ends the game with payoffs (8, 6, 8). After D, players 1 and 3 play a simultaneous coordination game in which 1 chooses E or F and 3 chooses G or H; the matched outcomes (E, G) and (F, H) yield payoffs (7, 10, 7), and the mismatched outcomes yield (0, 0, 0).]

(b) If player 2 plays C with probability 1, then we are done: player 1 must initially play B. So all that remains is to check when player 2 will want to play D with positive probability. This will happen if player 2 anticipates that, in the subsequent coordination game between players 1 and 3, one of the two pure strategy equilibria (it does not matter which one) is played. In that case 2 will strictly prefer to play D. But in these pure coordination outcomes player 1 gets 7, so once again he will prefer to play B right at the beginning. [If player 2 anticipates the only remaining equilibrium of the coordination game, the mixed one, which yields her an expected payoff of 5, she will want to play C, and this case has already been covered.]

(c) Suppose that player 2 anticipates that one of the two pure strategy equilibria will be played in the subsequent coordination game, but that player 1 anticipates the 50-50 mixed strategy equilibrium. In that case player 2 will, indeed, play D (anticipating a payoff of 10), but along this path player 1 anticipates an expected payoff of only 3.5 (why?). So under these beliefs, player 1 will play A.

This possibility is ruled out by the definition of a subgame perfect equilibrium, because it is assumed that players have common beliefs about the strategy profile that will be played in the game.

[8] Version (i): the auditor's strategy space is {Audit, Not Audit}; the individual's strategy space is the set of all maps from the auditor's space to {Evade, Not Evade}. Version (ii): the auditor's strategy space is [0, 1], where p ∈ [0, 1] represents the probability of an audit; the individual's strategy space is the set of all maps from [0, 1] to {Evade, Not Evade}. Version (iii): the auditor's strategy space is {Audit, Not Audit}; the individual's strategy space is {Evade, Not Evade}. Solving these games is very easy. The important point is to note the different interpretations of a probability of audit: in (ii) it is a pure strategy, while in (iii) it is a mixed strategy.

[9] Basically, we worked this out in class. We showed that in a single-entrant problem the entrant would enter and the incumbent would not fight. Now you should be able to solve the n-entrant problem by using the same logic of backward induction as in the centipede game. This shows that no matter how many entrants there are (as long as there are finitely many), each entrant will enter and the incumbent will accommodate.

[10] [OR Exercise 0.3.] Notice that if Army 1 has strictly more battalions than Army 2, then this will never change over the course of the game, no matter what they do. In this case the unique equilibrium of the game is for Army 1 to attack whenever it can (in these cases K ≥ 2) and for Army 2 never to attack. In this equilibrium Army 1's payoff is K + x, where x > 1 is the payoff from occupation. By deviating it can get only K. For Army 2 it suffices to consider a one-shot deviation. If it is in an attacking position, then for a profitable attack it must be that L ≥ 2 (otherwise it cannot occupy the island). If it attacks, then it loses a battalion. If L ≥ 2, it then loses the island next round for good (applying the strategies thereafter), so this deviation is not profitable.

So the only case left to consider is where the armies have the same number of battalions: K = L = M. If M = 1, the attacker does not attack (it would win but could not occupy). So if M = 2, an attack with permanent occupation thereafter will occur. This means that if M = 3 an attack will not occur. And so on. It follows that if M is odd an attack will not occur, while if M is even and positive, it will occur.

[11] To discuss this question, consider the matching pennies game:

          L        R
U      1, −1    −1, 1
D      −1, 1    1, −1

It should be obvious that no player can gain by committing to move first: the other player will simply exploit the knowledge of the previous move. Matching pennies is an example of a game with a first-mover disadvantage.

In contrast, consider any two-player game such that each player has a unique best response to the other player's strategy at each of its pure-strategy Nash equilibria, and assume that at least one such Nash equilibrium exists. Then notice that if a player is given the right to go first, she cannot be worse off. The reason is that she can always, at least, choose the Nash

equilibrium that is best for her and play her part of that strategy profile. She can be assured that her opponent will choose his part of the profile (this is where the unique best response at each Nash equilibrium helps), and so our first mover can always guarantee herself the payoff from the best possible Nash equilibrium. So she is no worse off when she moves first.

But sometimes she can do strictly better. Consider the Cournot duopoly with linear demand curve p = A − bQ and constant marginal cost c. Let us call the two individual outputs x and y. Calculate y as a best response to x: for each x, y is chosen to solve

max_y (A − bx − by − c)y,

with first-order condition (assuming y > 0)

A − bx − 2by − c = 0, or y = (A − bx − c)/(2b),

at least for all x such that A − bx − c ≥ 0 (otherwise y = 0).

When the player who chooses x moves first, the big difference is that she does not play a best response to y! In fact there is no single number y that describes the strategy of the other player, who moves second. If you draw the game tree, you see that the second player's strategy is a specification of y conditional on x. However, we know by subgame perfection that in equilibrium this conditional strategy must specify a best response to each pre-committed value of x. Thus player 1 solves

max_x (A − bx − by − c)x, knowing that subsequently y = (A − bx − c)/(2b).

The solution to this first-mover problem gives the payoff to moving first. Notice that a feasible choice here is the Cournot-Nash value of x from the simultaneous-move game; then y would also take its (unique) Cournot-Nash value, and the first mover can pick up the Cournot payoff at least. But she can do better. I leave the details to you: solve the maximization problem above and show that it yields a value of x different from the simultaneous-move Cournot-Nash value.

[12] Call the last period 0. Count backwards over A's offer points and label these points t = 0, 1, 2, .... Let A's payoff at point t be a_t. Now in the period just previous to t, B proposes and can obviously get 1 − αa_t by offering A the amount αa_t. Therefore what A can get at stage t + 1, which is just prior to this proposal by B, must be given by

a_{t+1} = 1 − β(1 − αa_t) = (1 − β) + αβ a_t.

This is a simple difference equation starting from a_0, and if there are T offer points, the solution a_T, which is A's first offer (in real time), is given by

a_T = (1 − β)[1 + αβ + (αβ)² + ... + (αβ)^{T−1}] + (αβ)^T a_0.

Because a_0 is surely bounded, its exact value is irrelevant as long as αβ is less than one. The initial offer converges to

a_∞ = (1 − β)/(1 − αβ)

as T → ∞, which is exactly the infinite-horizon Rubinstein bargaining solution studied in Lectures 5 and 6. By the same token, B's initial offer must converge to b_∞ = (1 − α)/(1 − αβ).

When α = β = 1, this convergence result breaks down. It can be verified that whenever A is the last person to make an offer, her first-period payoff a_T equals 1; otherwise it equals 0 (likewise for B). Such a sequence has no limit as T → ∞.
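The difference equation and its limit are easy to check numerically. A minimal sketch, not part of the original answer, with illustrative discount factors (any α, β with αβ < 1 would do):

```python
# Iterate a_{t+1} = (1 - beta) + alpha*beta*a_t and compare with the
# closed-form limit (1 - beta) / (1 - alpha*beta).
alpha, beta = 0.9, 0.8   # hypothetical discount factors, alpha*beta < 1
a = 0.0                  # a_0: its exact value is irrelevant in the limit
for _ in range(200):     # 200 offer points
    a = (1 - beta) + alpha * beta * a

limit = (1 - beta) / (1 - alpha * beta)
assert abs(a - limit) < 1e-12
```

Because the map is a contraction with modulus αβ, the error shrinks geometrically, which is exactly why the finite-horizon solution converges to the Rubinstein division. Setting alpha = beta = 1.0 instead makes the iteration stationary at a_0, illustrating the breakdown noted above.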