Game Theory Fall 2006

Answers to Problem Set 3

[1a] Omitted.

[1b] Let $\{a^k\}$ be a sequence of paths that converges in the product topology to $a$; that is, $a^k(t) \to a(t)$ for each date $t$ as $k \to \infty$. Let $M$ be the maximum absolute value of any one-shot payoff in the game. For any $\epsilon > 0$, choose $T$ such that $2\delta^{T+1} M < \epsilon/2$. Next, choose an index $K$ such that
$$(1-\delta)\sum_{t=0}^{T} \delta^t \left| f_i(a^k(t)) - f_i(a(t)) \right| < \epsilon/2 \quad \text{for all } k > K.$$
But then, for all $k > K$,
$$(1-\delta)\left|\sum_{t=0}^{\infty} \delta^t \left[ f_i(a^k(t)) - f_i(a(t)) \right]\right| \;\leq\; (1-\delta)\sum_{t=0}^{T} \delta^t \left| f_i(a^k(t)) - f_i(a(t)) \right| \;+\; (1-\delta)\sum_{t=T+1}^{\infty} \delta^t \left| f_i(a^k(t)) - f_i(a(t)) \right| \;\leq\; \epsilon/2 + 2\delta^{T+1} M \;<\; \epsilon,$$
which proves continuity.

[1c] Outputs across time are linked by the production function $y_{t+1} = f(k_t)$, where $f$ is some increasing, smooth, concave function with $f'(\infty) < 1$. This means that there is a maximal sustainable output $Y$: for all $k > Y$, $f(k) < k$ (the production function crosses the 45-degree line). Consequently, no matter what path of outputs we consider, consumption at any date can never exceed $\max\{y_0, Y\}$, where $y_0$ is the historically given initial output. Now proceed in the same way as in part (b).

[1d] We'll construct a counterexample with just a one-player problem. The player has vector payoffs, and the game is described as follows. First the player chooses a or b. If she chooses a, she gets a vector payoff of (1, 1) and the game is over. If she chooses b, she moves again, choosing between L and R. If L, the payoff is (3, 0); if R, it is (2, 2). The strategy (a, L) is not a subgame perfect equilibrium, but it is not improvable by a one-shot deviation. (The joint deviation to b followed by R yields (2, 2), which dominates (1, 1) in both components; yet switching only the first move while holding L fixed yields (3, 0), and switching only the second move leaves the date-0 choice of a in place, so neither one-shot deviation produces a dominating payoff vector.)
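To see the logic of [1d] in concrete form, here is a small Python sketch (not part of the original answer; the strategy encoding is mine): it evaluates the vector payoff of each strategy and checks component-wise dominance, confirming that neither one-shot deviation from (a, L) dominates it while the joint deviation to (b, R) does.

```python
def payoff(strategy):
    """Vector payoff of a strategy (first move, second move) in the game of [1d]."""
    first, second = strategy
    if first == "a":
        return (1, 1)                      # the game ends immediately after a
    return (3, 0) if second == "L" else (2, 2)

def dominates(u, v):
    """Component-wise strict dominance of vector payoffs."""
    return all(x > y for x, y in zip(u, v))

base = ("a", "L")
one_shot = [("b", "L"),   # change only the first move, keep L
            ("a", "R")]   # change only the (unreached) second move

print([dominates(payoff(s), payoff(base)) for s in one_shot])   # [False, False]
print(dominates(payoff(("b", "R")), payoff(base)))              # True: joint deviation improves
```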

[2] If δ is close enough to zero, then every subgame perfect equilibrium must involve the play of a static Nash equilibrium after every t-history, provided the number of actions is finite. To see this, fix any action profile a that is not a Nash equilibrium. Let $h(a) \equiv \max_i [d_i(a) - f_i(a)]$, where $d_i(a)$, as usual, is the maximum payoff to player i assuming all others play $a_{-i}$. Then $h(a) > 0$, because a isn't a Nash profile. Let $h \equiv \min h(a)$, where the minimum is taken over all profiles a that are not one-shot Nash. Because there are only finitely many action profiles, h must be positive. Let M and m be the maximum and minimum payoffs available to anyone in the game. Now pick δ close enough to zero that
$$h > \frac{\delta}{1-\delta}\,(M - m).$$
It is easy to see that for such δ there is no subgame perfect equilibrium other than the play of a one-shot Nash equilibrium in every period.

This conclusion may be false if there are infinitely many actions available to each player. Take the two-person game in which each $A_i = [0, 1]$ and the payoff to player i from $(a_i, a_j)$ is $2a_j - a_i^2$. Each i's dominant strategy is to set $a_i = 0$, but it is easy to check that all symmetric action profiles (a, a) yield payoffs $2a - a^2$ that are increasing in a on [0, 1]. For any discount factor, a symmetric action a > 0 can be supported (with reversion to the one-shot Nash equilibrium after a deviation) whenever
$$2a(1-\delta) \leq 2a - a^2,$$
that is, whenever $a \leq 2\delta$. It is easy to see, then, that no matter how close δ is to zero, there is some positive value of a (depending on δ, of course) that can be supported.
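The following sketch is illustrative and not part of the answer: it computes the bound $h > \frac{\delta}{1-\delta}(M-m)$ for a standard prisoner's dilemma (payoffs chosen by me), and checks the support condition $2a(1-\delta) \leq 2a - a^2$ of the continuum example for values of a around $2\delta$.

```python
import itertools

# A standard prisoner's dilemma (payoffs chosen for illustration only).
A = ["C", "D"]
f = {("C", "C"): (2, 2), ("C", "D"): (0, 3),
     ("D", "C"): (3, 0), ("D", "D"): (1, 1)}

def best_dev(profile, i):
    """d_i(a): player i's best payoff against the other player's action in profile a."""
    return max(f[tuple(ai if j == i else profile[j] for j in range(2))][i] for ai in A)

def is_nash(profile):
    return all(f[profile][i] >= best_dev(profile, i) for i in range(2))

non_nash = [a for a in itertools.product(A, A) if not is_nash(a)]
h = min(max(best_dev(a, i) - f[a][i] for i in range(2)) for a in non_nash)
M = max(v for pay in f.values() for v in pay)
m = min(v for pay in f.values() for v in pay)
print("h =", h, " M =", M, " m =", m)
print("only static Nash is playable whenever delta <", h / (h + (M - m)))

# Continuum example: payoff 2*a_j - a_i**2 on [0, 1]; reversion to a = 0 after a deviation.
delta = 0.05
for a in [0.01, 2 * delta, 2 * delta + 0.01]:
    cooperate = 2 * a - a**2            # per-period payoff on the symmetric path
    deviate = (1 - delta) * 2 * a       # best one-shot gain, then punishment payoff 0
    print(f"a = {a:.3f}: supportable = {cooperate >= deviate}")
```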

[3] (a) If the worst punishment to player i does not entail her playing a (static) best response, then player i's continuation payoff must be strictly higher than the worst punishment. Let $p^i$ denote the vector that punishes player i and let $(a, p, \hat p^1, \ldots, \hat p^n)$ be any supporter of $p^i$. Then, by the condition of support,
$$p^i_i = (1-\delta) f_i(a) + \delta p_i, \qquad \text{while} \qquad p^i_i \geq (1-\delta) d_i(a) + \delta \hat p^i_i.$$
Combining the two and using the fact that $d_i(a) > f_i(a)$, we see that $p_i > \hat p^i_i \geq p^i_i$.

(b) The player can always play a static best response to the prescribed action profile at any date, for every history. This guarantees her at least her security level in every period, and therefore over her lifetime.

[4] See class notes.

[5] Properties of the support mapping φ.

[a] Let $p \in \phi(E)$; then it has a supporter $(a, p', p^1, \ldots, p^n)$, where $(p', p^1, \ldots, p^n)$ lie in E. If $E \subseteq E'$, then $(p', p^1, \ldots, p^n)$ lie in $E'$ as well. It follows that $p \in \phi(E')$.

[b] Let $p_{(m)}$ be a sequence of payoff vectors in $\phi(E)$ converging to p. We need to show that $p \in \phi(E)$. Attached to each $p_{(m)}$ is a supporter $(a_{(m)}, p'_{(m)}, p^1_{(m)}, \ldots, p^n_{(m)})$. All the $a_{(m)}$ lie in a compact set, and so does the rest of the supporter. Extract a convergent subsequence along which $(a_{(m)}, p'_{(m)}, p^1_{(m)}, \ldots, p^n_{(m)}) \to (a, p', p^1, \ldots, p^n)$. By compactness of A and E, this limit collection is itself a valid supporter; we have to show that it supports p. The only step that needs care here is the use of the maximum theorem, to argue that the maximum deviation payoff $d_i(a)$ is continuous on A.

[6] Read Kocherlakota's article.

[7] (a) Take any $p \in F$. Then there exist convex weights $\lambda_1, \ldots, \lambda_m$ (where m is finite) and m action profiles $a^1, \ldots, a^m$ such that
$$p = \sum_{k=1}^{m} \lambda_k f(a^k).$$
Because any system of weights can be approximated by rational weights, for every $\epsilon > 0$ there exist an integer N and nonnegative integers $s_1, \ldots, s_m$ adding to N such that
$$\left\| \left( \frac{s_1}{N}, \ldots, \frac{s_m}{N} \right) - \lambda \right\| < \epsilon.$$
Consequently, for every $\epsilon > 0$, if I define
$$\hat p = \sum_{k=1}^{m} \frac{s_k}{N}\, f(a^k),$$
then

(1) $\|\hat p - p\| < \epsilon$.

Now consider the repeated game, and play the action profiles $a^1, \ldots, a^m$ in sequence, playing $a^k$ exactly $s_k$ times, for a total of N plays. Repeat this cycle forever. For each δ, let $p(\delta)$ be the normalized lifetime payoff thus generated. It is easy to see that

(2) $p(\delta) \to \hat p$ as $\delta \to 1$.

Combining (1) and (2), we are done.

(b) See class notes.
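A numerical illustration of the cycle construction in [7](a); the three stage-payoff vectors and the integer frequencies below are my own choices, purely for illustration. The cycle's normalized discounted payoff $p(\delta)$ approaches $\hat p = \sum_k (s_k/N) f(a^k)$ as $\delta \to 1$.

```python
import numpy as np

# Illustrative stage payoffs f(a^k) for three action profiles, and integer frequencies.
f = np.array([[3.0, 1.0], [0.0, 4.0], [2.0, 2.0]])   # rows: f(a^1), f(a^2), f(a^3)
s = [2, 1, 1]                                        # play a^1 twice, a^2 once, a^3 once
N = sum(s)
p_hat = sum(sk * fk for sk, fk in zip(s, f)) / N     # target payoff vector

def p_of_delta(delta, cycles=5000):
    """Normalized discounted payoff from repeating the cycle a^1,a^1,a^2,a^3 (truncated far out)."""
    path = np.array([fk for sk, fk in zip(s, f) for _ in range(sk)] * cycles)
    discounts = (1 - delta) * delta ** np.arange(len(path))
    return discounts @ path

for delta in [0.5, 0.9, 0.99, 0.999]:
    print(delta, p_of_delta(delta), "-> p_hat =", p_hat)
```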

[8] (a) (Outline.) Interpretation 1: as soon as the borrower defaults, she exits the relationship and receives an additional outside payoff of v; the lender receives nothing from that point on. Interpretation 2: the lender and borrower can restart the relationship, but each can also unilaterally choose the isolated option of no relationship at any date, in which case the borrower gets a payoff of $(1-\delta)v$ for each such date. The two interpretations are equivalent from the point of view of the payoffs that can be supported, because in both cases the borrower's security level is attainable as a subgame perfect equilibrium.

(b) Suppose that the same contract (L, R) is offered period after period. If such a contract is honored by the borrower, she gets $F(L) - R$. If she deviates, the worst punishment brings her a lifetime normalized payoff of v, so the deviation payoff is $(1-\delta)F(L) + \delta v$. So the constraint is that
$$F(L) - R \geq (1-\delta)F(L) + \delta v.$$
Rearranging, we see that

(3) $F(L) - \dfrac{R}{\delta} \geq v$.

On the other hand, for the lender to willingly advance L every period (and get R in return), we must have

(4) $R \geq (1+r)L$.

Combining (3) and (4), we see the necessity of the condition in the problem. But it is also sufficient, for if the condition holds, simply define $(L, R) = (x^*, (1+r)x^*)$, where $x^*$ maximizes the LHS of the condition.
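For [8](b), here is a hedged numerical sketch. The production function $F(L) = \sqrt{L}$ and all parameter values are assumptions chosen for illustration; combining (3) and (4), the condition referred to in the answer presumably reads $\max_x \{F(x) - (1+r)x/\delta\} \geq v$. The code finds the maximizer $x^*$ on a grid and checks that the stationary contract $(L, R) = (x^*, (1+r)x^*)$ satisfies both constraints.

```python
import numpy as np

# Illustrative assumptions (not from the problem set): F(L) = sqrt(L), plus
# an interest rate, a discount factor, and an outside option value.
F = np.sqrt
r, delta, v = 0.05, 0.9, 0.15

# Maximize F(x) - (1+r)x/delta over a grid of loan sizes.
grid = np.linspace(1e-6, 2.0, 200001)
lhs = F(grid) - (1 + r) * grid / delta
x_star = grid[np.argmax(lhs)]
print("max_x [F(x) - (1+r)x/delta] =", lhs.max(), " attained at x* =", x_star)
print("condition  max >= v  holds:", lhs.max() >= v)

# The stationary contract suggested in the answer: (L, R) = (x*, (1+r)x*).
L, R = x_star, (1 + r) * x_star
print("borrower IC (3): F(L) - R/delta >= v :", F(L) - R / delta >= v)
print("lender participation (4): R >= (1+r)L :", R >= (1 + r) * L)
print("borrower honors rather than defaults:",
      F(L) - R >= (1 - delta) * F(L) + delta * v)
```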

(c) To find an efficient contract within the class of stationary contracts, simply fix some return for the lender, call it z, and maximize borrower utility by choice of (L, R) subject to the constraint that $R - (1+r)L \geq z$ and the borrower's incentive constraint (3). Consider any situation in which (3) is indeed binding at the solution to this problem. Then, if we denote by S(z) the total surplus generated at that z (the sum of the two discount-normalized payoffs), we know that S(z) is strictly decreasing in z. [This should be apparent, but if it isn't, make sure you understand it.]

Now we're going to show how to Pareto-improve this stationary package by using a nonstationary sequence, while still maintaining all the incentive constraints. Begin by writing down the incentive constraint for any sequence of packages $\{L_t, R_t\}$:
$$(1-\delta)F(L_t) + \delta v \leq (1-\delta)\sum_{s=t}^{\infty} \delta^{s-t}\left[F(L_s) - R_s\right] \quad \text{for all } t,$$
or equivalently,

(5) $(1-\delta)R_t + \delta v \leq (1-\delta)\displaystyle\sum_{s=t+1}^{\infty} \delta^{s-t}\left[F(L_s) - R_s\right]$ for all t.

Let's evaluate this constraint in a couple of different situations. First, study it for the second-best stationary package (L, R) that yields the lender z. Let's call the return to the borrower B(z). [Notice that $S(z) = B(z) + z$.] Then (5) reduces to

(6) $(1-\delta)R + \delta v \leq \delta B(z)$.

Now consider the nonstationary sequence in which, for some small $\epsilon > 0$, the borrower receives the package $(L, R + \epsilon)$ at date 0, and this is followed forever after by the stationary package that yields the lender $z' \equiv z - (1-\delta)\epsilon/\delta$. By construction, the lender is absolutely indifferent between the original stationary package and this new two-pronged substitute. What about the borrower? Well, z' is below z, so the surplus $S(z') > S(z)$. Because $B(z) + z = S(z)$, this means that $B(z')$ is strictly greater than $B(z) + (1-\delta)\epsilon/\delta$. It follows from (6) that
$$(1-\delta)(R + \epsilon) + \delta v \leq \delta B(z'),$$
so that this two-pronged sequence satisfies all the constraints. To complete the proof, notice that the borrower is strictly better off, because
$$(1-\delta)\left[F(L) - (R+\epsilon)\right] + \delta B(z') > (1-\delta)\left[F(L) - R\right] - (1-\delta)\epsilon + \delta\left[B(z) + (1-\delta)\epsilon/\delta\right] = (1-\delta)\left[F(L) - R\right] + \delta B(z) = B(z).$$
Read Ray (2002) for more on this stuff.

[9] (a) Write $w^s$ and $w^p$ for the spot wages in the slack and peak seasons. The laborer's lifetime utility starting from a slack season is
$$u(w^s) + \delta u(w^p) + \delta^2 u(w^s) + \delta^3 u(w^p) + \cdots = \frac{u(w^s)}{1-\delta^2} + \frac{\delta\, u(w^p)}{1-\delta^2}.$$
But of course, this evaluation is different if you begin from the peak season (this will be crucial in what follows):
$$u(w^p) + \delta u(w^s) + \delta^2 u(w^p) + \delta^3 u(w^s) + \cdots = \frac{u(w^p)}{1-\delta^2} + \frac{\delta\, u(w^s)}{1-\delta^2}.$$
Now suppose that a landlord-employer with a linear payoff function offers the laborer a contract $(x^s, x^p)$, which is a vector of slack and peak payments.

(b) The only trick here is to define the action set of the landlord appropriately, so as to guarantee continuity and compactness. Clearly, we can place an upper bound on the contract wages (why?). Also, let n stand for "no contract," in which case the laborer's payoff is given by the spot wages. This is an isolated point of the space of all contracts, so continuity will be respected.
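As a quick check of the closed forms in [9](a), the following sketch uses assumed functional forms (log utility, illustrative wages and discount factor) and compares the geometric-series expressions for lifetime utility starting in the slack and in the peak season against brute-force summation.

```python
import math

# Illustrative assumptions: log utility, spot wages, and a discount factor.
u = math.log
w_s, w_p, delta = 1.0, 3.0, 0.9

def closed_form(first, second):
    """Lifetime utility when wages alternate first, second, first, second, ..."""
    return u(first) / (1 - delta**2) + delta * u(second) / (1 - delta**2)

def brute_force(first, second, T=5000):
    wages = [first if t % 2 == 0 else second for t in range(T)]
    return sum(delta**t * u(w) for t, w in enumerate(wages))

print("slack start:", closed_form(w_s, w_p), "approx.", brute_force(w_s, w_p))
print("peak  start:", closed_form(w_p, w_s), "approx.", brute_force(w_p, w_s))
```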

(c) Presumably, the objective is to help the laborer smooth consumption (while still turning a profit for the landlord), so it makes sense to look at the case in which $x^s > w^s$ and $x^p < w^p$. Now, if the offer is made in the slack season, there is a participation constraint to be met there, which is that

(7) $\dfrac{u(x^s)}{1-\delta^2} + \dfrac{\delta\, u(x^p)}{1-\delta^2} \geq \dfrac{u(w^s)}{1-\delta^2} + \dfrac{\delta\, u(w^p)}{1-\delta^2}$.

But this is only one half of the story. In the peak season the laborer gets only $x^p$ and therefore has an incentive (potentially) to break the contract, getting $w^p$ on the spot market. By our assumptions, this breach will make him a spot laborer ever thereafter. So his payoff contingent on breach is precisely his lifetime spot utility evaluated from the start of a peak season, so that the self-enforcement constraint simply boils down to

(8) $\dfrac{u(x^p)}{1-\delta^2} + \dfrac{\delta\, u(x^s)}{1-\delta^2} \geq \dfrac{u(w^p)}{1-\delta^2} + \dfrac{\delta\, u(w^s)}{1-\delta^2}$.

These are the two constraints that have to be met. [Actually, one implies the other; see below.]

(d) Using (7) and (8), we now show that a mutually profitable contract exists if and only if

(9) $\delta^2 u'(w^s) > u'(w^p)$.

First, remove the $(1-\delta^2)$ terms in these constraints to obtain the inequalities

(10) $u(x^s) + \delta u(x^p) \geq u(w^s) + \delta u(w^p)$

and

(11) $u(x^p) + \delta u(x^s) \geq u(w^p) + \delta u(w^s)$,

respectively. Next, notice that (11) automatically implies (10) (this is just another instance of the enforcement constraint implying the participation constraint). This is because (11) is equivalent to $\delta[u(x^s) - u(w^s)] \geq u(w^p) - u(x^p)$; since both sides are nonnegative and $\delta < 1$, this implies $u(x^s) - u(w^s) \geq \delta[u(w^p) - u(x^p)]$, which in turn is equivalent to (10). So all we have to look for are conditions under which (11) alone is met for some $(x^s, x^p)$ with $w^s \leq x^s$ and $x^p \leq w^p$, and such that $w^s + \delta w^p > x^s + \delta x^p$, which is the profitability condition for the employer. Equivalently, construct the zero-profit locus $x^s = w^s + \delta w^p - \delta x^p$ and plug it into (11), to ask whether there is some $x^p < w^p$ such that
$$u(x^p) + \delta u(w^s + \delta w^p - \delta x^p) \geq u(w^p) + \delta u(w^s).$$
Notice that the LHS of this inequality is strictly concave in $x^p$, and moreover at $x^p = w^p$ the LHS precisely equals the RHS. So the necessary and sufficient condition for the inequality to hold at some $x^p$ distinct from $w^p$ is that the derivative of the LHS with respect to $x^p$, evaluated at $x^p = w^p$, be negative. Performing this calculation gives $u'(w^p) - \delta^2 u'(w^s) < 0$, which is precisely (9).
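Finally, a numerical sketch of condition (9), again under assumed functional forms (log utility and illustrative wages, so $u'(c) = 1/c$). For each δ, the code checks whether $\delta^2 u'(w^s) > u'(w^p)$ and, independently, searches the zero-profit locus $x^s = w^s + \delta w^p - \delta x^p$ for some $x^p < w^p$ satisfying (11); the two answers should agree.

```python
import numpy as np

# Illustrative assumptions: log utility (so u'(c) = 1/c) and spot wages with w_p > w_s.
u = np.log
w_s, w_p = 1.0, 3.0

for delta in [0.5, 0.7]:
    condition_9 = delta**2 * (1.0 / w_s) > (1.0 / w_p)       # delta^2 u'(w_s) > u'(w_p)

    # Search the zero-profit locus x_s = w_s + delta*w_p - delta*x_p for some x_p < w_p
    # satisfying (11): u(x_p) + delta*u(x_s) >= u(w_p) + delta*u(w_s).
    x_p = np.linspace(0.5, w_p - 1e-6, 100_000)
    x_s = w_s + delta * w_p - delta * x_p
    satisfies_11 = u(x_p) + delta * u(x_s) >= u(w_p) + delta * u(w_s)

    print(f"delta = {delta}: condition (9) holds: {condition_9}; "
          f"some x_p < w_p on the zero-profit locus satisfies (11): {bool(satisfies_11.any())}")
```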