Game Theory. Wolfgang Frimmel. Repeated Games

Game Theory Wolfgang Frimmel Repeated Games 1 / 41

Recap: SPNE The solution concept for dynamic games with complete information is the subgame perfect Nash equilibrium (SPNE). Selten (1965): A strategy profile s is a subgame perfect Nash equilibrium of an extensive game if s is a Nash equilibrium in every subgame. Any finite extensive-form game with complete (but imperfect) information has a SPNE (possibly involving mixed strategies); proof: by backward induction. The SPNE need not be unique. SPNE eliminates all non-credible NE, i.e. equilibria sustained by threats that would not actually be carried out. 2 / 41

Repeated Games Until now, we considered so-called one-shot games, with the (implicit) assumption that the game is played once among players who do not expect to meet each other again. In real life, games are typically played within a larger context, and actions affect not only the present situation but may also have implications for the future. 3 / 41

Repeated Games Players may therefore take the future into account, which also affects their behavior in the present: if the same players meet again repeatedly, threats and promises about future behavior can influence current behavior. Such situations are captured in repeated games, in which a stage game is played repeatedly. Normal-form or extensive-form games are repeated finitely or infinitely, regardless of what has been played in previous rounds, and often with the same set of players. The outcome of a repeated game is a sequence of stage-game outcomes. 4 / 41

Finitely repeated game Definition: Let T = {0, 1, ..., n} be the set of all possible dates, and let G be a stage game with perfect information, which is played at each t ∈ T. The payoff of each player in this larger game is the sum of the payoffs the player receives in each stage game. Denote this larger game by G^T. At the beginning of each repetition, a player considers what each player has played in the previous rounds. A strategy in the repeated game G^T assigns a strategy to each stage game G. 5 / 41

Two-stage game: Prisoners' dilemma Consider a situation in which two players play the Prisoners' Dilemma game
      C    D
C   5,5  0,6
D   6,0  1,1
Now assume T = {0, 1} and G is this Prisoners' Dilemma game. Then the repeated game G^T can be represented in extensive form as: [extensive-form game tree not reproduced in this transcription] 6 / 41

Two-stage game: Prisoners' dilemma (cont.) At t = 1, a history is a strategy profile of the stage game, indicating what has been played at t = 0: (C, C), (C, D), (D, C), (D, D). G^T has 4 subgames; note that payoffs are the sum of the payoffs from both stages (no discounting)! We have (D, D) as the unique Nash equilibrium in each of these subgames, so the actions at t = 1 are independent of what is played at t = 0. Given the behavior at t = 1, the game at t = 0 reduces to:
      C    D
C   6,6  1,7
D   7,1  2,2
We add 1 to each payoff, as this is each player's payoff from t = 1. The unique equilibrium of this reduced game is (D, D). This is also the unique subgame-perfect equilibrium: at each history, each player plays D. 7 / 41
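
To make the backward-induction step concrete, here is a minimal sketch (my own illustration, not part of the lecture) that encodes the stage game above, adds the anticipated (D, D) continuation payoffs to form the reduced first-stage game, and searches it for pure-strategy Nash equilibria:

```python
# Minimal sketch (not from the slides): backward induction for the twice-played
# Prisoners' Dilemma with the stage-game payoffs shown above.
from itertools import product

stage = {  # (player 1 action, player 2 action) -> (payoff 1, payoff 2)
    ("C", "C"): (5, 5), ("C", "D"): (0, 6),
    ("D", "C"): (6, 0), ("D", "D"): (1, 1),
}
actions = ["C", "D"]

def pure_nash(game):
    """Return all pure-strategy Nash equilibria of a two-player game dict."""
    equilibria = []
    for a1, a2 in product(actions, actions):
        u1, u2 = game[(a1, a2)]
        if all(u1 >= game[(b, a2)][0] for b in actions) and \
           all(u2 >= game[(a1, b)][1] for b in actions):
            equilibria.append((a1, a2))
    return equilibria

# Stage 2: (D, D) is the unique NE after every history, worth (1, 1) to the players.
c1, c2 = stage[("D", "D")]

# Stage 1 therefore reduces to the stage game plus the anticipated continuation payoffs.
reduced = {a: (u1 + c1, u2 + c2) for a, (u1, u2) in stage.items()}
print(reduced)              # reproduces the 6,6 / 1,7 / 7,1 / 2,2 table
print(pure_nash(reduced))   # [('D', 'D')] -> defect in both stages
```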

n-stage games What about arbitrary n? On the last day n, independent of what has been played in the previous rounds, there is a unique Nash equilibrium in the resulting subgame: each player plays D. The actions on day n - 1 therefore have no effect on what will be played on the next day. Going back to date 0, we find a unique SPNE: at each t, for each outcome of the previous stage games, players play D. This is a general result! 8 / 41

Finitely repeated games with unique NE in stage game Definition: Given a stage game G, let G^T denote the finitely repeated game in which G is played T times, with the outcomes of all preceding plays observed before the next round. The payoffs for G^T are simply the sum of the payoffs from the T stage games. Selten's Theorem: If the stage game G has a unique Nash equilibrium, then, for any finite T, the repeated game G^T has a unique subgame perfect Nash equilibrium: the Nash equilibrium of G is played in every stage. A proof can be found in any advanced game theory or microeconomics book. 9 / 41

Finitely repeated games with multiple NE in stage game Consider the following modified version of the Prisoners' dilemma:
      C    D    R
C   5,5  0,6  0,0
D   6,0  1,1  0,0
R   0,0  0,0  4,4
There are two pure-strategy NE in this stage game. Assume that this stage game is played twice. Playing any sequence of stage-game NE would be a SPNE. Now consider the following conditional strategy (conditional on future NE): in the first stage, players anticipate that the second-stage outcome will be a Nash equilibrium of the stage game, hence (D, D) or (R, R). Players anticipate that (R, R) will be the second-stage outcome if the first-stage outcome is (C, C), while (D, D) will be the second-stage outcome otherwise. 10 / 41

Finitely repeated games with multiple NE in stage game Players' first-stage interactions then amount to the following one-shot game:
      C    D    R
C   9,9  1,7  1,1
D   7,1  2,2  1,1
R   1,1  1,1  5,5
There are 3 pure-strategy Nash equilibria: (C, C), (D, D) and (R, R). (1) The NE (D, D) corresponds to (D, D) in the first stage and (D, D) in the second stage. (2) The NE (R, R) corresponds to (R, R) in the first stage and (D, D) in the second stage. (3) The NE (C, C) corresponds to (C, C) in the first stage and (R, R) in the second stage. 11 / 41
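
As a rough cross-check (my own sketch, using the payoff matrices above), the following code builds the induced first-stage game from the conditional continuation rule, (R, R) after (C, C) and (D, D) otherwise, and confirms that its pure-strategy Nash equilibria are exactly (C, C), (D, D) and (R, R):

```python
# Sketch (my own, payoffs as in the modified game above): build the induced
# first-stage game when players expect (R, R) after (C, C) and (D, D) otherwise.
from itertools import product

stage = {
    ("C", "C"): (5, 5), ("C", "D"): (0, 6), ("C", "R"): (0, 0),
    ("D", "C"): (6, 0), ("D", "D"): (1, 1), ("D", "R"): (0, 0),
    ("R", "C"): (0, 0), ("R", "D"): (0, 0), ("R", "R"): (4, 4),
}
actions = ["C", "D", "R"]

def continuation(outcome):
    # second-stage NE selected by the conditional strategy
    return stage[("R", "R")] if outcome == ("C", "C") else stage[("D", "D")]

induced = {o: (u1 + continuation(o)[0], u2 + continuation(o)[1])
           for o, (u1, u2) in stage.items()}

def pure_nash(game):
    equilibria = []
    for a1, a2 in product(actions, actions):
        u1, u2 = game[(a1, a2)]
        if all(u1 >= game[(b, a2)][0] for b in actions) and \
           all(u2 >= game[(a1, b)][1] for b in actions):
            equilibria.append((a1, a2))
    return equilibria

print(pure_nash(induced))   # [('C', 'C'), ('D', 'D'), ('R', 'R')]
```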

Finitely repeated games with multiple NE in stage game (D, D) and (R, R) are concatenations of Nash equilibrium outcomes of the stage game. (C, C) is a qualitatively different result: (C, C) in the first stage is not a NE of the stage game. Cooperation is possible in the first stage of a SPNE of a repeated game because of a credible threat and punishment. However, this SPNE depends on the assumption about players' anticipations for the second stage (see the conditional strategy). Our conditional strategy requires playing (D, D) in the second stage, which appears silly if (R, R) is available. The credible punishment for a player who deviates from (C, C) in the first stage is playing a Pareto-dominated equilibrium in the second stage. 12 / 41

Single-deviation principle Verifying that a given strategy profile is a SPNE can be difficult: the game above has 10 subgames, hence 3^10 strategies for each player! Solution: the single-deviation principle. Definition: Given the strategies of the other players, strategy s_i of player i in a repeated game satisfies the single-deviation principle if player i cannot gain by deviating from s_i in a single stage game, holding all other players' strategies and the rest of her own strategy fixed. Proposition: In a finitely repeated game, a strategy profile s is a SPNE if and only if each player's strategy satisfies the single-deviation principle. This proposition also extends to infinitely repeated games, provided future payoffs are discounted! 13 / 41

Single-deviation principle Let's check for our example. The single-deviation principle requires checking for single deviations by each player at each stage. Second stage: if (C, C) is observed, the best response is R (5 + 4 > 5 + 0); after any other history, (D, D) is prescribed, which is a stage-game NE, so no single deviation is profitable there either. First stage: a deviation in the first stage would yield a payoff of 6 + 1 < 5 + 4. No single deviation is profitable, so (C, C) in the first stage followed by (R, R) in the second stage is a SPNE. 14 / 41
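
The single-deviation check above can also be run mechanically. The sketch below (my own, not part of the slides) enumerates every single-stage deviation for the profile "play C in stage 1; play R after (C, C) and D after any other history" and finds none that is profitable:

```python
# Rough check (my own sketch) of the single-deviation principle for the profile:
# play C in stage 1; in stage 2 play R after (C, C) and D after any other history.
from itertools import product

stage = {
    ("C", "C"): (5, 5), ("C", "D"): (0, 6), ("C", "R"): (0, 0),
    ("D", "C"): (6, 0), ("D", "D"): (1, 1), ("D", "R"): (0, 0),
    ("R", "C"): (0, 0), ("R", "D"): (0, 0), ("R", "R"): (4, 4),
}
actions = ["C", "D", "R"]

def second_stage(history):
    return ("R", "R") if history == ("C", "C") else ("D", "D")

def total(first, second, i):
    return stage[first][i] + stage[second][i]

profitable = []
# single deviations in the second stage, at every possible first-stage history
for hist in product(actions, actions):
    prescribed = second_stage(hist)
    for i in (0, 1):
        for a in actions:
            deviation = list(prescribed)
            deviation[i] = a
            if stage[tuple(deviation)][i] > stage[prescribed][i]:
                profitable.append(("stage 2", hist, i, a))
# single deviations in the first stage (continuation play held fixed)
for i in (0, 1):
    for a in actions:
        first = ["C", "C"]
        first[i] = a
        first = tuple(first)
        if total(first, second_stage(first), i) > total(("C", "C"), ("R", "R"), i):
            profitable.append(("stage 1", i, a))

print(profitable)   # [] -> no profitable single deviation, so the profile is a SPNE
```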

Finitely vs. infinitely repeated games Credible threats and promises about future behavior can influence current behavior. If the relationship is only finitely repeated, this is only true if the stage game has multiple equilibria: if G is a static game of complete information with multiple NE, then the T-times repeated game G^T may have SPNE in which, for any t < T, the outcome in stage t is not a NE of G. For infinitely repeated games this result is stronger: even if the stage game G has a unique NE, there may be SPNE of the infinitely repeated game in which the outcome is not a NE of the stage game G. Hence, infinitely repeated games may be suitable for modeling cooperation sustained by threats and punishment strategies. 15 / 41

Infinitely repeated games Simply summing the payoffs from an infinite sequence of stage games does not provide a useful measure of players' payoffs in an infinitely repeated game. Why? Because the sum is typically infinite. Solution: use the discounted sum of the sequence of payoffs. Each player i has a payoff function u_i and a discount factor δ_i ∈ [0, 1) such that an infinite sequence (s^1, s^2, ...) is evaluated by u_i(s^1) + δ_i u_i(s^2) + δ_i² u_i(s^3) + ... = Σ_{t=1}^∞ δ_i^{t-1} u_i(s^t). The discount factor δ_i measures how much a player cares about the future: when δ_i is close to 0, player i cares little about the future (impatient); when δ_i is close to 1, player i cares a lot about the future (patient). 16 / 41
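
A small numerical illustration (mine, with an assumed δ = 0.9) of the discounted-sum criterion: a long truncation of the constant stream of 5s is close to the exact value 5/(1 - δ) = 50:

```python
# Small illustration (my own sketch) of the discounted-sum payoff criterion.
def present_value(payoffs, delta):
    """Discounted sum of a finite prefix of a payoff stream: sum_t delta^(t-1) * u_t."""
    return sum(delta ** t * u for t, u in enumerate(payoffs))

# Cooperating forever in the Prisoners' Dilemma above yields 5 per period; with an
# assumed delta = 0.9, a long truncation is close to the exact value 5 / (1 - 0.9) = 50.
print(present_value([5] * 1000, delta=0.9))   # approximately 50.0
```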

Infinitely repeated games The infinitely repeated game differs only in the set of terminal histories, which is the set of infinite sequences (s^1, s^2, ...). The payoff is the present value Σ_{t=1}^∞ δ_i^{t-1} u_i(s^t). Note: one could also use the present value as a measure of payoffs in finitely repeated games. One could also reinterpret δ in terms of a repeated game that ends after a random number of repetitions, where 1 - δ is the probability that the game ends immediately and δ is the probability that it continues for at least one more stage. Are infinitely repeated games likely to occur? Intuitively, in many long-lasting interactions, the termination date of the interaction is unknown to the players or plays little role. 17 / 41

SPNE in infinitely repeated games Consider again the following Prisoners' dilemma:
      C    D
C   5,5  0,6
D   6,0  1,1
Analogous to finitely repeated games: playing the unique stage-game NE (D, D) in every stage implies a NE in every subgame of the infinitely repeated game, hence a SPNE. In the presence of credible punishment, we may also get SPNE that differ from Nash equilibrium outcomes of the stage game: there are strategies leading to (C, C) in every stage game as part of a SPNE. Examples of such strategies: (grim) trigger strategy, tit-for-tat strategy, limited punishment. 18 / 41

Strategies in an infinitely repeated game (Grim) trigger strategy: s_i(∅) = C and, for every history (s^1, ..., s^t), s_i(s^1, ..., s^t) = C if (s_j^1, ..., s_j^t) = (C, ..., C) and D otherwise, where j is the other player. Player i chooses C at the start of the game and after any history in which every previous action of player j was C. If player j ever chooses D, player i switches to action D as well. Once D is reached, this state is never left. 19 / 41
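
The grim trigger rule translates almost literally into code; the sketch below (my own) takes a history of action profiles and returns player i's prescribed action:

```python
# Direct translation (my own sketch) of the grim trigger strategy defined above.
def grim_trigger(history, player):
    """history: list of past action profiles (a_1, a_2); player: 0 or 1."""
    other = 1 - player
    if all(profile[other] == "C" for profile in history):
        return "C"   # opponent has always cooperated so far (or the game just started)
    return "D"       # opponent defected at least once: punish forever

print(grim_trigger([], 0))                         # C
print(grim_trigger([("C", "C"), ("C", "D")], 0))   # D, since player 2 once played D
```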

Strategies in an infinitely repeated game (cont.) Tit-for-tat: the length of the punishment depends on the behavior of the punished player. If the punished player keeps playing D, tit-for-tat continues to do so as well (no reversion to C). Whenever the punished player reverts to C, tit-for-tat reverts to C as well. In other words: do whatever the other player did in the previous period. 20 / 41
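
For comparison, a sketch of tit-for-tat (again my own illustration), simulated against an opponent who defects once and then returns to cooperation; note that the punishment lasts exactly one period:

```python
# Sketch of tit-for-tat (my own illustration), simulated against an opponent
# who defects once and then returns to cooperation.
def tit_for_tat(history, player):
    if not history:
        return "C"
    return history[-1][1 - player]   # copy the opponent's previous action

opponent_moves = ["C", "D", "C", "C", "C"]   # assumed play of player 2
history = []
for move in opponent_moves:
    history.append((tit_for_tat(history, 0), move))

print(history)   # [('C','C'), ('C','D'), ('D','C'), ('C','C'), ('C','C')]
```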

Nash equilibrium: (Grim) trigger strategy Assume that player 1 uses the grim trigger strategy. If player 2 uses this strategy as well, (C, C) will be the outcome in every period, with payoffs (5, 5, ...). The discounted sum is 5 + 5δ + 5δ² + 5δ³ + ... = 5/(1 - δ). If player 2 uses a different strategy, then in at least one period her action is D. In all subsequent periods, player 1 chooses D as well, since it is a best response. Up to the first period in which player 2 chooses D, her payoff is 5 in each period. 21 / 41

Nash equilibrium: (Grim) trigger strategy Player 2's subsequent sequence of payoffs is (6, 1, 1, ...): she gains one unit from the deviation, but then receives only 1 per period due to player 1's reaction. Hence, the discounted sum from deviating is 6 + δ + δ² + δ³ + ... = 6 + δ/(1 - δ). Cooperation is successful if the payoff from cooperation is at least as good as the payoff from cheating: 5/(1 - δ) ≥ 6 + δ/(1 - δ), i.e. cooperation if δ ≥ 1/5. In this example, only very impatient players with δ < 1/5 can increase their payoff by deviating. 22 / 41
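
The δ ≥ 1/5 threshold can be verified numerically; the sketch below (mine, using exact fractions to avoid rounding at the boundary) compares the two present values for a few discount factors:

```python
# Numerical check (my own sketch) of the delta >= 1/5 condition for grim trigger,
# using exact fractions to avoid rounding issues at the boundary.
from fractions import Fraction as F

def pv_cooperate(delta):
    return 5 / (1 - delta)              # stream (5, 5, 5, ...)

def pv_deviate(delta):
    return 6 + delta / (1 - delta)      # stream (6, 1, 1, ...)

for delta in (F(1, 10), F(1, 5), F(1, 2), F(9, 10)):
    print(delta, pv_cooperate(delta) >= pv_deviate(delta))
# 1/10 False, 1/5 True, 1/2 True, 9/10 True -> cooperation is sustainable iff delta >= 1/5
```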

Nash equilibrium: Tit-for-tat Assume that player 1 uses the tit-for-tat strategy. When player 2 also adheres to this strategy, the equilibrium outcome will be (C, C) in every period. Now assume that D is a best response to tit-for-tat for player 2. Denote by t the first period in which player 2 chooses D; player 1 will then choose D from period t + 1 onwards, until player 2 reverts to C. Player 2 has two options from period t + 1 onwards: revert to C and face the same situation as at the start of the game, or continue with D, in which case player 1 will continue to do so as well. So if player 2's best response to tit-for-tat is choosing D in some period, then she either alternates between C and D or chooses D in every period. 23 / 41

Nash equilibrium: Tit-for-tat (cont.) The payoff from alternating between C and D is the stream (6, 0, 6, 0, ...), with present value 6/(1 - δ²). The payoff from staying with D is the stream (6, 1, 1, ...), with present value 6 + δ/(1 - δ). The payoff from playing tit-for-tat is the stream (5, 5, 5, ...), with present value 5/(1 - δ). Hence, tit-for-tat is a best response to tit-for-tat if and only if 5/(1 - δ) ≥ 6/(1 - δ²) and 5/(1 - δ) ≥ 6 + δ/(1 - δ). Both of these conditions are equivalent to δ ≥ 1/5. Whenever δ ≥ 1/5, the strategy pair in which both players use tit-for-tat is a Nash equilibrium. 24 / 41
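
A quick check (my own sketch) that both conditions indeed hold exactly from δ = 1/5 onwards:

```python
# Quick check (my own sketch) that both tit-for-tat conditions hold exactly from delta = 1/5 on.
from fractions import Fraction as F

def pv_tft(d):        return 5 / (1 - d)        # stream (5, 5, 5, ...)
def pv_alternate(d):  return 6 / (1 - d ** 2)   # stream (6, 0, 6, 0, ...)
def pv_always_d(d):   return 6 + d / (1 - d)    # stream (6, 1, 1, ...)

for d in (F(1, 10), F(1, 5), F(1, 2)):
    print(d, pv_tft(d) >= pv_alternate(d) and pv_tft(d) >= pv_always_d(d))
# 1/10 False, 1/5 True, 1/2 True
```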

Folk Theorem A main objective of studying repeated games is to explore the relation between short-term and long-term incentives. When players are patient, their long-term incentives take over, and a large set of behaviors may result in equilibrium. This equilibrium multiplicity is a general implication of (infinitely) repeated games. The main result is stated in the so-called Folk Theorem. First, we need to introduce two further definitions. 25 / 41

Feasible payoffs We call payoffs (x_1, ..., x_n) feasible in the stage game G if they are a convex combination (i.e. a weighted average) of the pure-strategy payoffs of G. Graphically, the set of feasible payoffs for the Prisoners' Dilemma example is the shaded region spanned by the points (1, 1), (6, 0), (0, 6) and (5, 5). [Figure: shaded feasible-payoff set with these four vertices.] The pure-strategy payoffs (1, 1), (6, 0), (0, 6) and (5, 5) are feasible; all other pairs in the shaded region are weighted averages of pure-strategy payoffs. 26 / 41

Average payoffs Players' payoffs are still defined over the present value (PV) of the infinite payoff stream, but can be expressed in terms of the average payoff from the same infinite sequence of payoffs. The average payoff is the payoff that would have to be received in every stage game so as to yield that PV. Definition: Given the discount factor δ, the average payoff of the infinite sequence of payoffs u^1, u^2, ... is (1 - δ) Σ_{t=1}^∞ δ^{t-1} u_i(s^t). Note that for a fixed payoff stream ū in every stage, the PV is ū/(1 - δ), so the average payoff is (1 - δ)PV = (1 - δ) ū/(1 - δ) = ū. The average payoff is directly comparable to payoffs from a stage game. Since the average payoff is just a rescaling of the PV, maximising the average payoff is equivalent to maximising the PV. 27 / 41
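
A small numerical illustration (mine, with an assumed δ = 0.9) that the average payoff of a constant stream equals the per-period payoff:

```python
# Small illustration (my own sketch, assumed delta = 0.9): the average payoff is
# (1 - delta) times the present value, so a constant stream of 5s averages to 5.
delta = 0.9
stream = [5] * 2000                          # long truncation of the infinite stream
pv = sum(delta ** t * u for t, u in enumerate(stream))
print((1 - delta) * pv)                      # approximately 5.0, the per-period payoff
```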

Folk Theorem The Folk Theorem states that any feasible payoff vector that gives every player strictly more than a stage-game Nash equilibrium payoff can be supported in a SPNE when the players are sufficiently patient. Folk Theorem: Let G be a finite static game of complete information. Let (e_1, ..., e_n) denote the payoffs from a Nash equilibrium of G, and let (x_1, ..., x_n) denote any other feasible payoffs from G. If x_i > e_i for every player i and if δ is sufficiently close to one, then there exists a subgame perfect Nash equilibrium of the infinitely repeated game G that achieves (x_1, ..., x_n) as the average payoff. 28 / 41

Folk Theorem: Proof (I) The proof follows the arguments for the infinitely repeated Prisoners' dilemma. Let (a_{e1}, ..., a_{en}) be the NE of G that yields the equilibrium payoffs (e_1, ..., e_n), and let (a_{x1}, ..., a_{xn}) be the action profile yielding the feasible payoffs (x_1, ..., x_n). Consider the standard trigger strategy for each player i = 1, ..., n: play a_{xi} in the first stage; in the t-th stage, if the outcome of all t - 1 preceding stages has been (a_{x1}, ..., a_{xn}), then play a_{xi}; otherwise play a_{ei}. Assume that all players have adopted this trigger strategy. 29 / 41

Folk Theorem: Proof (II) Since the others will play (a_{e1}, ..., a_{e,i-1}, a_{e,i+1}, ..., a_{en}) forever once one stage's outcome differs from (a_{x1}, ..., a_{xn}), playing a_{ei} is then a best response for player i. What is the best response for player i in the first stage and in any stage where all preceding outcomes have been (a_{x1}, ..., a_{xn})? Let a_{di} be player i's best deviation from (a_{x1}, ..., a_{xn}) and d_i the corresponding payoff from this deviation. Hence we have the payoff relationship d_i ≥ x_i > e_i. The present value of player i's payoff sequence from deviating is d_i + δe_i + δ²e_i + ... = d_i + δe_i/(1 - δ). 30 / 41

Folk Theorem: Proof (III) Alternatively, playing a_{xi} will yield a payoff of x_i in the current stage. If playing a_{xi} is optimal, the present value is x_i + δx_i + δ²x_i + ... = x_i/(1 - δ). If playing a_{di} is optimal, the present value is (see before) d_i + δe_i/(1 - δ). So, playing a_{xi} is optimal if and only if x_i/(1 - δ) ≥ d_i + δe_i/(1 - δ), or δ ≥ (d_i - x_i)/(d_i - e_i). 31 / 41

Folk Theorem: Proof (IV) Since this threshold value may differ across players, it is a NE for all players to play the trigger strategy if and only if δ ≥ max_i (d_i - x_i)/(d_i - e_i). The threshold discount factor for the trigger strategy being a NE is determined by the (short-term) gain from deviation: a higher short-term gain from non-cooperation makes cooperation more difficult to achieve; and by the (long-term) loss from deviation: a higher long-term loss from non-cooperation (i.e. a stronger punishment) makes cooperation easier to achieve. 32 / 41
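
Applied to the Prisoners' Dilemma from earlier (my own check): targeting (C, C) with x_i = 5, best-deviation payoff d_i = 6 and stage-NE payoff e_i = 1 gives back the familiar threshold δ ≥ 1/5:

```python
# My own check: the trigger-strategy threshold delta >= max_i (d_i - x_i) / (d_i - e_i),
# evaluated for the Prisoners' Dilemma above with target payoffs (5, 5).
def threshold(d, x, e):
    """d: best-deviation payoff, x: target payoff, e: stage-game NE payoff."""
    return (d - x) / (d - e)

# For both players: deviating from (C, C) yields 6, and the stage NE (D, D) yields 1.
print(max(threshold(6, 5, 1), threshold(6, 5, 1)))   # 0.2, i.e. delta >= 1/5 as derived earlier
```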

Folk Theorem: Proof (V) Is this Nash equilibrium also subgame perfect, i.e. is it a Nash equilibrium in every subgame of G(∞, δ)? There are two classes of subgames: (1) subgames in which all outcomes of earlier stages have been (a_{x1}, ..., a_{xn}), and (2) subgames in which the outcome of at least one earlier stage differs from (a_{x1}, ..., a_{xn}). If players adopt the trigger strategy, then in the first class of subgames the players' strategies are again the trigger strategy, and we just showed that this is a NE for the game as a whole. In the second class of subgames the players' strategies are to repeat the stage-game equilibrium (a_{e1}, ..., a_{en}) forever, which is also a NE for the game as a whole. Hence the trigger-strategy Nash equilibrium of the infinitely repeated game is subgame perfect (if δ is sufficiently large). 33 / 41

Folk Theorem The Folk Theorem implies that any point in the feasible set lying above and to the right of the stage-game NE payoff (1, 1), i.e. above and to the right of the red lines in the figure, can be achieved as the average payoff in a SPNE, if the discount factor is sufficiently large. [Figure: feasible payoff set with vertices (1, 1), (6, 0), (0, 6) and (5, 5); the red lines mark payoffs of 1 for each player.] Main message: although repeated games allow for cooperative behaviour, they also allow for an extremely wide range of behavior. 34 / 41

Application: Cartels Task: Demand is given by p = A - Q, marginal cost is constant and equal to c, where A > c. There are n firms in the market, and the stage game is Cournot competition. The firms' discount factor is δ ∈ (0, 1). (1) Find the critical value of the discount factor to sustain collusion if firms use grim trigger strategies; assume that collusive behavior involves equal sharing of monopoly output and profits. (2) How does the minimum discount factor depend on the number of firms? 35 / 41
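
One possible way to attack this exercise numerically (my own sketch, not the official solution, with assumed values A = 10 and c = 2): compute the collusive, deviation and Cournot profits per firm and the grim-trigger threshold δ*(n) = (π_d - π_m/n)/(π_d - π_c); the threshold turns out not to depend on A and c:

```python
# One possible numerical attack on the exercise (my own sketch, not the official solution).
# Assumed parameter values A = 10, c = 2; the threshold is independent of them.
A, c = 10.0, 2.0

def critical_delta(n):
    pi_collusive = (A - c) ** 2 / (4 * n)        # equal share of monopoly profit
    pi_cournot = ((A - c) / (n + 1)) ** 2        # stage-game Cournot NE profit per firm
    q_others = (n - 1) * (A - c) / (2 * n)       # rivals stick to their collusive output
    q_dev = (A - c - q_others) / 2               # one-period best response to that output
    pi_dev = (A - c - q_others - q_dev) * q_dev  # one-period deviation profit
    # grim trigger sustains collusion iff pi_collusive/(1-d) >= pi_dev + d*pi_cournot/(1-d)
    return (pi_dev - pi_collusive) / (pi_dev - pi_cournot)

for n in (2, 3, 5, 10):
    print(n, round(critical_delta(n), 3))
# 2 0.529, 3 0.571, 5 0.643, 10 0.752 -> collusion becomes harder to sustain as n grows
```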

Application: Efficiency wages (Shapiro & Stiglitz, 1984) Firms induce workers to work harder by paying high wages and threatening to fire workers caught shirking. Firms reduce their demand for labor, so some workers are employed at high wages, but involuntary unemployment increases. The larger pool of unemployed workers strengthens the threat of being fired. In competitive equilibrium, the wage w and the unemployment rate u just induce workers not to shirk, and labor demand at w results in unemployment rate u. 36 / 41

Efficiency wages: Stage game The firm offers the worker a wage w. If the worker rejects w, she becomes self-employed at wage w_0. If she accepts w, the worker chooses either to supply effort (with disutility e) or to shirk (without any disutility). The effort decision is unobserved by the firm, but the worker's output (low: y = 0, or high: y > 0) is observed. For simplicity: in case of high effort, output is high, but if the worker shirks, output is low. If the firm employs the worker at wage w, the payoffs with effort are y - w for the firm and w - e for the worker; if the worker shirks, e = 0, and if output is low, y = 0. Assume that y - e > w_0, hence it is efficient for the worker to be employed and supply effort. 37 / 41

Efficiency wages: Stage game Backward induction: the worker observes the wage offer w. If w ≤ w_0: reject. If w > w_0: accept and set e = 0 (this maximises the payoff w - e!). Firms anticipate e = 0, so they offer w = 0, and the worker chooses self-employment. Is there a way to offer a wage premium w > w_0 in an infinitely repeated game? Yes, if there is a credible threat to fire the worker in case of low output. Consider the following strategy: the firm offers w = w* > w_0 in the first period and in each subsequent period as long as output has been high, but offers w = 0 otherwise. The worker accepts the firm's offer and provides effort if w ≥ w*, but shirks otherwise. Trigger strategy: play cooperatively provided that all previous plays have been cooperative, but switch forever to the SPNE of the stage game in case of a deviation. 38 / 41

Efficiency wages: Worker The firm offers w* and the worker accepts. If the worker provides effort, output will be high, so the firm will offer w* again in the next period. If it is optimal for the worker to provide effort, the present value of the worker's payoff is V_e = (w* - e)/(1 - δ). If the worker shirks, the present value of the worker's payoff is V_s = w* + δw_0/(1 - δ). The incentive to supply effort exists if (w* - e)/(1 - δ) ≥ w* + δw_0/(1 - δ), or w* ≥ w_0 + e/δ. 39 / 41

Efficiency wages: Firm The firm decides between w = w*, inducing effort by threatening to fire the worker in case of low output and leaving a profit of y - w*, and w = 0, inducing the worker to choose self-employment and leaving a profit of zero in each period. It is optimal for the firm to offer the wage w = w* if y - w* ≥ 0. Since w* ≥ w_0 + e/δ, this requires y ≥ w_0 + e/δ. Cooperation is a Nash equilibrium if e/δ ≤ y - w_0. 40 / 41
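
A tiny numerical check (my own sketch, with assumed values for y, w_0, e and δ) of the two conditions, w* ≥ w_0 + e/δ for the worker and y - w* ≥ 0 for the firm:

```python
# Tiny numerical check (my own sketch, assumed values) of the two cooperation conditions:
# w* >= w0 + e/delta for the worker and y - w* >= 0 for the firm.
y, w0, e, delta = 10.0, 2.0, 1.0, 0.5    # assumed output, outside wage, effort cost, discount factor

w_star = w0 + e / delta                  # lowest wage that just induces effort: 4.0
worker_ok = w_star >= w0 + e / delta     # holds with equality at this w*
firm_ok = y - w_star >= 0                # firm still earns y - w* = 6.0 per period

print(w_star, worker_ok, firm_ok)        # 4.0 True True -> cooperation is sustainable here
```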

Efficiency wages: Equilibrium Recall that we have a sequential-move stage game in which workers observe wage offers. Cooperation (if δ is sufficiently large) is the SPNE outcome along high-wage, high-output histories, with firms setting w = w*. What is the SPNE play after all other histories? Workers will never supply effort again, so firms induce them to choose self-employment by setting w = 0 from the next stage on: permanent self-employment. If the worker is ever caught shirking, w = 0 forever after; if the firm deviates from offering w = w*, then the worker sets e = 0 forever after, so the firm cannot afford to employ the worker. Cooperation is only a SPNE if renegotiation is not feasible. 41 / 41