Boston Library Consortium Member Libraries



Digitized by the Internet Archive in 2011 with funding from Boston Library Consortium Member Libraries


working paper, department of economics: REPEATED GAMES WITH LONG-RUN AND SHORT-RUN PLAYERS. By Drew Fudenberg, David Kreps, Eric Maskin. No. 474, January 1988. massachusetts institute of technology, 50 memorial drive, Cambridge, Mass.


REPEATED GAMES WITH LONG-RUN AND SHORT-RUN PLAYERS. By Drew Fudenberg, David Kreps, Eric Maskin. No. 474, January 1988. M.I.T., Department of Economics; Stanford University, Graduate School of Business; and Harvard University, Department of Economics, respectively.


ABSTRACT

This paper studies the set of equilibrium payoffs in games with long- and short-run players and little discounting. Because the short-run players are unconcerned about the future, equilibrium outcomes must always lie on their static reaction (best response) curves. The obvious extension of the Folk Theorem to games with this constraint would simply include the constraint in the definitions of the feasible payoffs and of the minmax values. This extension does obtain under the assumption that each player's choice of a mixed strategy for the stage game is publicly observable, but, in contrast to standard repeated games, the limit value of the set of equilibrium payoffs is different if players can observe only their opponents' realized actions.


1. Introduction

The "folk theorem" for repeated games with discounting says that (under mild conditions) each individually rational payoff can be attained in a perfect equilibrium for a range of discount factors close to one. It has long been realized that results similar to the folk theorem can arise if some of the players play the constituent game infinitely often and others play the constituent game only once, so long as all of the players are aware of all previous play. A standard example is the infinitely repeated version of Selten's [1977] chain-store game, where a single incumbent faces an infinite sequence of short-run entrants in the game depicted in Figure 1. Each entrant cares only about its one-period payoff, while the incumbent maximizes its net present value. For discount factors close to one there is a perfect equilibrium in which entry never occurs, even though this is not a perfect equilibrium if the game is played only once or even a fixed finite number of times. In this equilibrium, each entrant's strategy is "stay out if the incumbent has fought all previous entry; otherwise, enter," and the incumbent's strategy is "fight each entry as long as entry has always been fought in the past; otherwise, acquiesce." Other examples of games with long- and short-run players are the papers of Dybvig-Spatt [1980] and Shapiro [1982] on a firm's reputation for producing high-quality goods and the papers of Simon [1951] and Kreps [1984] on the nature of the employment relationship.

This paper studies the set of equilibrium payoffs in games with long- and short-run players and little discounting. This set differs from what it would be if all players were long-run, as demonstrated by the prisoner's dilemma with one enduring player facing a sequence of short-run opponents. Because the short-run players will fink in every period, the only equilibrium

is the static one, no matter what the discount factor. In general, because the short-run players are unconcerned about the future, equilibrium outcomes must always lie on their static reaction (best response) curves. This is also true off of the equilibrium path, so the reservation values of the long-run players are higher when some of their opponents are short-run, because their punishments must be drawn from a smaller set.

The perfect folk theorem for discounted repeated games (Fudenberg-Maskin [1986]) shows that, under a mild full-dimensionality condition, any feasible payoffs that give all players more than their minmax values can be attained by a perfect equilibrium if the discount factor is near enough to one. The obvious extension of this result to games with the constraint that short-run players always play static best responses would simply include that constraint in the definitions of the feasible payoffs and of the minmax values. Propositions 1 and 2 of Section 2 show that this extension does obtain under the assumption that each player's choice of a mixed strategy for the stage game is publicly observable.

We then turn to the more realistic case in which players observe only their opponents' realized actions and not their opponents' mixed strategies. While in standard repeated games the folk theorem obtains in either case, when there are some short-run players the set of equilibria can be strictly smaller if mixed strategies are not observed. The explanation for this difference is that in ordinary repeated games, while mixed strategies may be needed during punishment phases, they are not necessary along the equilibrium path. In contrast, with short-run players some best responses, and thus some of the feasible payoffs, can only be obtained if the long-run players use mixed strategies. If the mixed strategies are not observable, inducing the long-run players to randomize may require that "punishments" occur with positive

probability even if no player has deviated. For this reason the set of equilibrium payoffs may be bounded away from the frontier of the feasible set. Proposition 3 of Section 3 provides a complete characterization of the limiting value of the set of equilibrium payoffs for a single long-run player.

This characterization, and the results of Section 2, assume that players have access to a publicly observable randomizing device. The device is used to implement strategies of the form: if player i deviates, then players jointly switch to a "punishment equilibrium" with some probability p < 1. While the assumption of public randomizations is not implausible, it is interesting to know whether it leads to a larger limit set of equilibrium payoffs. Proposition 4 in Section 4 shows that it does not: we construct "target strategies" in which a player is punished with probability one whenever his discounted payoff to date exceeds a target value, and show that these strategies can be used to obtain as an equilibrium any of the equilibrium payoffs that were obtained via public randomizations in Proposition 3.

Proposition 3 shows that not all feasible payoffs can be obtained as equilibria, so in particular we know that some payoffs cannot be obtained with the target strategies of Proposition 4. Inspection of that construction shows that it fails for payoffs that are higher than what the long-run player can obtain with probability one given the incentive constraint of the short-run players: for payoffs this high, there is a positive probability that player 1 will suffer a run of "bad luck" after which no possible sequence of payoffs could draw his discounted normalized value up to the target. As this problem does not arise under the criterion of time-average payoffs, one might wonder if the set of equilibrium payoffs is larger under time-averaging. Proposition 5 shows that the answer is yes. In fact, any feasible incentive-compatible payoffs can arise as equilibria with time-averaging, so that we

obtain the same set of payoffs as in the case where the long-run player's mixed strategies are observable. This discontinuity of the equilibrium set in passing from discounting to time averaging is reminiscent of a similar discontinuity that has been established for the equilibria of repeated partnership games (Radner [1986], Radner-Myerson-Maskin [1986]). The relationship between the two models is discussed further in Section 5. We have not solved the case of several long-run players and unobservable mixed strategies. Section 5 gives an indication of the additional complications that this case presents.

2. Observable Mixed Strategies

Consider a finite n-player game in normal form, $g: S_1 \times \cdots \times S_n \to \mathbb{R}^n$. We denote player i's mixed strategies by $\sigma_i \in \Sigma_i$, and write $g(\sigma)$ for the expected value of $g$ under distribution $\sigma$. In this section we assume that a player can observe the others' past mixed strategies. This assumption (or a restriction to pure strategies) is standard in the repeated-games literature, but as Fudenberg-Maskin [1986], [1987a] have shown, it is not necessary there. (Here it matters; see the next section!) We will also assume that the players can make their actions contingent on the outcome of a publicly observable randomizing device. Label the players so that players 1 to j are long-run and j+1 to n are short-run. Let

$B: \Sigma_1 \times \cdots \times \Sigma_j \rightrightarrows \Sigma_{j+1} \times \cdots \times \Sigma_n$

be the correspondence which maps any strategy selection $(\sigma_1, \ldots, \sigma_j)$ for the long-run players to the corresponding Nash equilibrium strategy selections for the short-run players. If there is only one short-run player, $B(\sigma)$ is his best-response correspondence. For each i from 1 to j, choose $m^i = (m^i_1, \ldots, m^i_n)$ so that $m^i$ solves

$\min_{m \in \operatorname{graph}(B)} \max_{\sigma_i} g_i(\sigma_i, m_{-i})$,

and set

$\underline{v}_i = \max_{\sigma_i} g_i(\sigma_i, m^i_{-i})$.

(This minimum is attained because the constraint set graph(B) is compact and the function $\max_{\sigma_i} g_i(\sigma_i, m_{-i})$ is continuous in $m_{-i}$.)

The strategies $m^i_{-i}$ minimize long-run player i's maximum attainable payoff over the graph of B. The restriction to this set reflects the constraint that the short-run players will always choose actions that are short-run optimal. Given this constraint, no equilibrium of the repeated game can give player i less than $\underline{v}_i$. (In general, the short-run players could force player i's payoff even lower using strategies that are not short-run optimal.) Note that $m^i$ specifies player i's strategy $m^i_i$, which need not be a best response to $m^i_{-i}$: player i must play in a certain way to induce the short-run players to attain the minimum in the definition of $m^i$. In order to construct equilibria in which player i's payoff is close to $\underline{v}_i$, player i will need to be provided with an incentive to cooperate in his own punishment.
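The constrained minmax value $\underline{v}_i$ can be illustrated with a short computation. The sketch below is not from the paper: it uses a hypothetical prisoner's-dilemma stage game with one long-run and one short-run player (the paper's own motivating example of an enduring player facing short-run opponents), discretizes the long-run player's mixed strategies, and minimizes over the graph of the short-run player's best-response correspondence.

```python
# Illustrative sketch (hypothetical payoffs, not from the paper): the
# constrained minmax value for a long-run player facing one short-run player.
# Actions: 0 = cooperate, 1 = fink. g1/g2 are the long-run and short-run
# players' payoffs, indexed [long-run action][short-run action].
g1 = [[2, -1], [3, 0]]
g2 = [[2, 3], [-1, 0]]

def short_run_best_responses(p):
    """Pure best responses of the short-run player when the long-run player
    finks with probability p."""
    payoffs = [(1 - p) * g2[0][b] + p * g2[1][b] for b in (0, 1)]
    best = max(payoffs)
    return [b for b in (0, 1) if abs(payoffs[b] - best) < 1e-12]

# graph(B): pairs (p, b) with b a short-run best response to p.
grid = [i / 100 for i in range(101)]
graph_B = [(p, b) for p in grid for b in short_run_best_responses(p)]

# Minimize, over graph(B), the long-run player's best payoff against b.
v_lower = min(max(g1[a][b] for a in (0, 1)) for (_, b) in graph_B)
print(v_lower)
```

In this game the short-run player finks against every p, so the constrained minmax is the mutual-finking payoff, consistent with the paper's observation that the only equilibrium of that example is the static one.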

In the repeated version of g, we suppose that long-run players maximize the discounted normalized sum of their single-period payoffs, with common discount factor $\delta$. That is, long-run player i's payoff is $(1-\delta)\sum_{t=0}^{\infty} \delta^t g_i(\sigma(t))$. Short-run players in each period act to maximize that period's payoff. All players, both long- and short-run, can condition their play on all previous actions.

Let $U = \{v = (v_1, \ldots, v_n) \mid \exists \sigma \in \operatorname{graph}(B) \text{ with } g(\sigma) = v\}$, let $V$ be the convex hull of $U$, and let $V^* = \{v \in V \mid \text{for all } i \text{ from 1 to } j,\ v_i > \underline{v}_i\}$. We call payoffs in $V$ attainable payoffs for the long-run players. Only payoffs in $V^*$ can arise in equilibrium. We begin with the case of a single long-run player.

Proposition 1: If player one is the only long-run player, then for any $v_1 \in V^*$ there exists a $\underline{\delta} \in (0,1)$ such that for all $\delta \in (\underline{\delta}, 1)$, there is a subgame-perfect equilibrium of the infinitely repeated game with discount factor $\delta$ in which player one's discounted normalized payoff is $v_1$.

Proof: Fix $v_1 \in V^*$ and consider the following strategies. Begin in Phase A, where players play a $\sigma \in \operatorname{graph}(B)$ (or a public randomization over such $\sigma$'s) that gives player 1 payoff $v_1$. Deviations by the short-run players are ignored. If player one deviates, he is punished by players switching to the punishment strategy $m^1$ for $T(\delta)$ periods, after which play returns to Phase A; if $T(\delta)$ is large enough, deviations in Phase A are unprofitable. Now $m^1_1$ need not be a best response against $m^1_{-1}$, so we must ensure that player one

does not prefer to deviate during the punishment phase. This is done by specifying that a deviation in this phase restarts the punishment. Since the most that player 1 can obtain in any period of the punishment phase is $\underline{v}_1$, he will prefer not to deviate so long as $T(\delta)$ is short enough that player 1's normalized payoff at the start of the punishment phase is at least $\underline{v}_1$. Let $\bar{v}_1 = \max_{\sigma \in \operatorname{graph}(B)} g_1(\sigma)$. The two constraints on $T(\delta)$ will be satisfied if:

(1) $(1-\delta)\bar{v}_1 + \delta(1-\delta^{T(\delta)})g_1(m^1) + \delta^{T(\delta)+1}v_1 \le v_1$, or equivalently

(1') $\delta^{T(\delta)+1} \le \big(v_1 - \delta g_1(m^1) - (1-\delta)\bar{v}_1\big)\big/\big(v_1 - g_1(m^1)\big)$,

and

(2) $(1-\delta^{T(\delta)})g_1(m^1) + \delta^{T(\delta)}v_1 \ge \underline{v}_1$, or equivalently

(2') $\delta^{T(\delta)} \ge \big(\underline{v}_1 - g_1(m^1)\big)\big/\big(v_1 - g_1(m^1)\big)$.

The right-hand sides of inequalities (1') and (2') have the same denominator, and for $\delta$ close to 1 the numerator of (1') exceeds the numerator of (2'). Then since $\delta^T$ is approximately continuous in T for $\delta$ close to 1, we can find a $\underline{\delta} < 1$ such that for all greater $\delta$ there is a $T(\delta)$ satisfying (1') and (2'). Q.E.D.

In repeated games with three or more players, a full-dimensionality condition is required for all feasible individually rational payoffs to be enforceable when $\delta$ is near enough to one. The corresponding condition here is that the dimensionality of $V^*$ equals the number of long-run players.
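Conditions of the form (1') and (2') can be checked numerically. The numbers below are hypothetical, chosen only to illustrate how the two inequalities jointly bound the punishment length from below and from above.

```python
# Hypothetical numbers (not from the paper) illustrating how conditions (1')
# and (2') pin down the punishment length T(delta) in Proposition 1.
delta = 0.95
v1 = 2.0      # target payoff in Phase A
g1_m = 0.0    # player 1's payoff g_1(m^1) during the punishment phase
v_low = 1.0   # minmax value underline{v}_1
v_high = 3.0  # best attainable payoff bar{v}_1 over graph(B)

rhs1 = (v1 - delta * g1_m - (1 - delta) * v_high) / (v1 - g1_m)
rhs2 = (v_low - g1_m) / (v1 - g1_m)

# T works if delta^(T+1) <= rhs1 (deviating in Phase A is unprofitable) and
# delta^T >= rhs2 (player 1 is willing to cooperate in her own punishment).
feasible = [T for T in range(1, 200)
            if delta ** (T + 1) <= rhs1 and delta ** T >= rhs2]
print(min(feasible), max(feasible))
```

With these numbers any punishment length between 1 and 13 periods satisfies both constraints; as $\delta \to 1$ the window widens, which is why a suitable $T(\delta)$ exists for all $\delta$ close enough to one.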

Proposition 2: Assume that the dimensionality of $V^*$ equals j, the number of long-run players. Then for each $v$ in $V^*$, there is a $\underline{\delta} \in (0,1)$ such that for all $\delta \in (\underline{\delta}, 1)$ there is a subgame-perfect equilibrium of the infinitely repeated game with discount factor $\delta$ in which player i's normalized payoff is $v_i$.

Remark: The proof of Proposition 2 follows that of Fudenberg-Maskin's Theorem 2: If a (long-run) player deviates, he is punished long enough to wipe out the gain from deviation. To induce the other (long-run) players to punish him, they are given a "reward" at the end of the punishment phase. One small complication not present in Fudenberg-Maskin is that, as in Proposition 1, the player being punished must take an active role in his punishment. This, however, can be arranged with essentially the same strategies as before.

Proof: Choose a $\sigma$ (or a public randomization over several $\sigma$'s) so that $g(\sigma) = v$. Also choose $v'$ in the interior of $V^*$ and an $\epsilon > 0$ so that for all i from 1 to j, $(v'_1+\epsilon, \ldots, v'_{i-1}+\epsilon, v'_i, v'_{i+1}+\epsilon, \ldots, v'_j+\epsilon)$ is in $V^*$ and $v'_i + \epsilon < v_i$. Let $\tau^i$ be a joint strategy that yields $v'_k + \epsilon$ to each long-run player $k \neq i$, and yields $v'_i$ to player i. Let $w^j_i = g_i(m^j)$ be player i's period payoff when j is being punished with the strategies $m^j$. For each i, choose an integer $N_i$ so that $\bar{v}_i + N_i \underline{v}_i < (N_i + 1)v'_i$, where $\bar{v}_i = \max_s g_i(s)$ is i's greatest one-period payoff. Consider the following repeated-game strategy for player i:

(0) Obey the following rules regardless of how the short-run players have played in the past:

(A) Play $\sigma_i$ each period as long as all long-run players played $\sigma$ last period, or if $\sigma$ had been played until last period and two or more long-run players failed to play $\sigma$ last period. If long-run player j deviates from (A), then

(B$_j$) Play $m^j_i$ for $N_j$ periods, and then

(C$_j$) Play $\tau^j_i$ thereafter.

If long-run player k deviates in phase (B$_j$) or (C$_j$), then begin phase (B$_k$). (As in Phase A, players ignore simultaneous deviations by two or more long-run players.)

As usual, it suffices to check that in every subgame no player can gain from deviating once and then conforming. The condition on $N_j$ ensures that for $\delta$ close to one, the gain from deviating in Phase A or Phase C is outweighed by Phase B's punishment. If player j conforms in (B$_j$) (i.e., when she is being punished), her payoff is at least $(1-\delta^{N_j})w^j_j + \delta^{N_j}v'_j$, which exceeds $\underline{v}_j$ if $\delta$ is close enough to one. If she deviates once and then conforms, she receives at most $\underline{v}_j$ in the period she deviates, and postpones the continuation payoff $v'_j > \underline{v}_j$, which lowers her payoff. If player k deviates in Phase (B$_j$), she is minmaxed for the next $N_k$ periods and Phase-C play will give her $v'_k$ instead of $v'_k + \epsilon$. Thus it is easy to show that such a deviation is unprofitable. (See Fudenberg-Maskin for the missing computations.)
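The choice of the punishment length $N_i$ can be made concrete. The payoff numbers below are hypothetical; the computation simply finds the smallest integer satisfying the inequality named in the proof.

```python
# Hypothetical payoffs (not from the paper) showing the choice of N_i in
# Proposition 2: the smallest integer with
#   bar{v}_i + N_i * underline{v}_i < (N_i + 1) * v'_i,
# so one period of deviation gain is wiped out by N_i periods of minmaxing.
v_bar = 3.0    # player i's greatest one-period payoff
v_under = 0.0  # player i's minmax value underline{v}_i
v_prime = 1.0  # continuation payoff v'_i, with v_under < v_prime

N = 1
while not (v_bar + N * v_under < (N + 1) * v_prime):
    N += 1
print(N)
```

Here the deviation gain of 3 requires three periods of punishment before the continuation payoff of 1 per period dominates; the inequality is strict, as the proof requires.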

3. Unobservable Mixed Strategies

We now drop the assumption that players can observe their opponents' mixed strategies, and instead assume they can only observe their opponents' realized actions. In ordinary repeated games, (privately) mixed strategies are needed during punishment phases, because in general a player's minmax value is lower when his opponents use mixed strategies. However, mixed strategies are not required along the equilibrium path, since desired play along the path can be enforced by the threat of future punishments. Fudenberg-Maskin showed that, under the full-dimension condition of Proposition 2, players can be induced to use mixed strategies as punishments by making the continuation payoffs at the end of a punishment phase depend on the realized actions in that phase in such a way that each action in the support of the mixed strategy yields the same overall payoff.

In contrast, with short-run players some payoffs (in the graph of B) can only be obtained if the long-run players privately randomize, so that mixed strategies are in general required along the equilibrium path. As a consequence, the set of equilibrium payoffs in the repeated game can be strictly smaller when mixed strategies are not observable. This is illustrated by the following example of a game with one long-run player, Row, and one short-run player, Col. Let p be the probability that Row plays D. Col's best response is M if p < 1/2, L if 1/2 ≤ p < 100/101, and R if p ≥ 100/101. There are three static equilibria: the pure-strategy equilibrium (D,R), a second in which p = 1/2 and Col mixes between M and L, and a third in which p = 100/101 and Col mixes between L and R. Row's maximum attainable payoff is 3, which occurs when p = 1/2 and Col plays L.
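The payoff matrix for this example is garbled in the transcription, so the Col payoffs in the sketch below are hypothetical, chosen only to reproduce the best-response cutoffs that the text does state (1/2 and 100/101); the Row payoffs against L (4 and 2) are taken from the text.

```python
# The figure's entries are illegible here, so these Col payoffs are
# assumptions chosen to match the stated cutoffs: M optimal for p < 1/2,
# L for 1/2 <= p < 100/101, and R for p >= 100/101.
from fractions import Fraction

# Col's payoffs (u_C(U, col), u_C(D, col)) for each column; assumptions only.
col_payoffs = {"M": (1, 0), "L": (0, 1), "R": (-100, 2)}

def cutoff(col_a, col_b):
    """p at which Col is indifferent between two columns (payoffs linear in p)."""
    aU, aD = col_payoffs[col_a]
    bU, bD = col_payoffs[col_b]
    return Fraction(aU - bU, (aU - bU) + (bD - aD))

p_ML = cutoff("M", "L")
p_LR = cutoff("L", "R")
print(p_ML, p_LR)

# Row's payoffs against L are stated in the text: u_R(U,L) = 4, u_R(D,L) = 2,
# so at p = 1/2 Row's expected payoff against L is 3.
row_vs_L = Fraction(1, 2) * 4 + Fraction(1, 2) * 2
```

Any payoff assignment with these indifference points would serve equally well; the point is only that linear Col payoffs can generate the three best-response regions the text describes.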

[Figure 1: payoff matrix for the Row-Col stage game; the entries are illegible in this transcription.]

If Row's mixed strategy is observable, she can attain this payoff in the infinitely repeated game if $\delta$ is near enough to 1. If, however, Row's mixed strategy is not observable, her highest equilibrium payoff is at most 2 regardless of $\delta$. To see this, fix a discount factor $\delta$, and let $v^*(\delta)$ be the supremum over all Nash equilibria of Row's equilibrium payoff. Suppose that for some $\delta$, $v^*(\delta) = 2 + \epsilon' > 2$, and choose an equilibrium $\hat{\sigma}$ such that player 1's payoff is $v(\hat{\sigma}) = v^*(\delta) - \epsilon > 2$. It is easy to see that the set of equilibrium payoffs is stationary: any equilibrium payoff is an equilibrium payoff for any subgame, and conversely. Thus, the highest payoff player 1 can obtain starting from period 2 is also bounded by $v^*(\delta)$. Since $v(\hat{\sigma})$ is the weighted average of player 1's first-period payoff and her expected continuation payoff, player 1's first-period payoff must be at least $v^*(\delta) - \epsilon/(1-\delta)$. For $\epsilon$ sufficiently small, this implies that player 1's first-period payoff must exceed 2.

In order for Row's first-period payoff to be at least 2, Col must play L with positive probability in the first period. As Col will only play L if Row randomizes between U and D, Row must be indifferent between her first-period choices, and in particular must be willing to play D. Let $v_D$ be Row's expected payoff from period 2 on if she plays D in the first period. Then we

must have

(3) $2(1-\delta) + \delta v_D \ge v^*(\delta) - \epsilon$.

But since $v_D \le v^*(\delta)$, (3) implies that $v^*(\delta) \le 2 + \epsilon/(1-\delta)$; as $\epsilon$ can be taken arbitrarily small for fixed $\delta$, we conclude that $v^*(\delta) \le 2$.

While Row cannot do as well as if her mixed strategies were observable, she can still gain by using mixed strategies. For $\delta$ near enough to one there is an equilibrium which gives Row a normalized payoff of 2, while Row's best payoff when restricted to pure strategies is the static equilibrium yielding 1. To induce Row to mix between U and D, specify that following periods when Col expects mixing and Row plays U, play switches with probability p to (D,R) for ten periods and then reverts to Row randomizing and Col playing L. The probability p is chosen so that Row is just indifferent between receiving 2 for the next eleven periods, or receiving 4 today and risking punishment with probability p. This construction works quite generally, as shown in the following proposition.

Proposition 3: Consider a game with a single long-lived player, player one, and let

$v_1^* = \max_{\sigma \in \operatorname{graph}(B)} \ \min_{s_1 \in \operatorname{supp}(\sigma_1)} g_1(s_1, \sigma_{-1})$.

Then for any $v_1 \in (\underline{v}_1, v_1^*]$ there exists a $\delta' < 1$ such that for all $\delta \in (\delta', 1)$, there is an equilibrium in which player one's normalized payoff is $v_1$. For no $\delta$ is there an equilibrium where player one's payoff exceeds $v_1^*$.
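The indifference condition in the ten-period example above can be solved explicitly. The values below come from the text (2 per period in the mixing phase, 4 from playing U today, and (D,R) worth 1 per period during the punishment); the closed form for p is a straightforward rearrangement and should be read as a sketch.

```python
# Numerical check of the ten-period punishment example: the probability p
# that leaves Row exactly indifferent between mixing and grabbing 4 today.
delta = 0.99

v_good = 2.0  # per-period value of the mixing phase
v_pun = (1 - delta ** 10) * 1.0 + delta ** 10 * v_good  # 10 periods of (D,R)

# Indifference: (1-d)*4 + d*((1-p)*v_good + p*v_pun) = v_good, solved for p.
p = (1 - delta) * (4.0 - v_good) / (delta * (v_good - v_pun))

payoff_U = (1 - delta) * 4.0 + delta * ((1 - p) * v_good + p * v_pun)
print(p, payoff_U)
```

As $\delta \to 1$ the formula gives $p \to 2/10 = 0.2$: the one-period gain of 2 must be offset by a one-in-five chance of losing 1 per period for ten periods.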

Proof: We begin by constructing a "punishment equilibrium" in which player one's normalized payoff is exactly $\underline{v}_1$. If $\underline{v}_1$ is player one's payoff in a static equilibrium this is immediate, so assume all the static equilibria give player one more than $\underline{v}_1$. The strategies we will use have two phases. The game begins in Phase A, where the players use $m^1$, a strategy which holds player one's maximum one-period payoff to $\underline{v}_1$. If player one plays $s_1$, players publicly randomize between remaining in Phase A and switching to a static Nash equilibrium for the remainder of the game. If $e_1$ is player one's payoff in this static equilibrium, set the probability of switching after $s_1$, $p(s_1)$, to be

(4) $p(s_1) = \dfrac{(1-\delta)\big(\underline{v}_1 - g_1(s_1, m^1_{-1})\big)}{\delta\,(e_1 - \underline{v}_1)}$.

(If $\delta$ is near enough to one, $p(s_1)$ is between 0 and 1.) The switching probability has been constructed so that player one's normalized payoff is $\underline{v}_1$ for all actions, including those in the support of $m^1_1$, so she is indifferent among these actions.

Next we construct strategies yielding $v_1^*$. Let $\sigma^* = (\sigma_1^*, \sigma_{-1}^*)$ be the corresponding mixed strategies. Play begins in Phase A with players following $\sigma^*$. If player one deviates to an action outside the support of $\sigma_1^*$, then switch to the "punishment equilibrium" constructed above. If player one plays an action $s_1$ in the support of $\sigma_1^*$, then switch to the punishment equilibrium with probability $p(s_1)$, and otherwise remain in Phase A. The probability $p(s_1)$ is chosen so that player one's payoff to all actions in the support of $\sigma_1^*$ is $v_1^*$. As above, this probability exists if $\delta$ is near enough to one. These strategies are clearly an equilibrium for large $\delta$.

Equilibrium payoffs between $\underline{v}_1$ and $v_1^*$ are obtained by using public randomizations between those two values. The argument that player one's payoff cannot exceed $v_1^*$ is exactly as in the example.

4. No Public Randomizations

The equilibria that we constructed in the proofs of Propositions 1 through 3 relied on our assumption that players can condition their play on the outcome of a publicly observed random variable. While that assumption is not implausible, it is also of interest to know whether the assumption is necessary for our results. For this reason, Proposition 4 below extends Proposition 3 to games without public randomizations. (We have not thought about the possible extension of Propositions 1 and 2 because we think the situation without public randomizations but where private randomization can be verified ex post is without interest.) The intuition, as explained in Fudenberg-Maskin [1987c], is that public randomizations serve to convexify the set of attainable payoffs, and when $\delta$ is near to 1 this convexification can be achieved by sequences of play which vary over time in the appropriate way. Fudenberg-Maskin [1987c] shows that public randomizations are not necessary for the proof of the perfect Folk Theorem. However, as we have already seen, there are important differences between classic repeated games and repeated games with some short-run players, so the fact that public randomizations are not needed for the folk theorem should not be thought to settle the question here.

Proposition 4: Consider a game with a single long-run player, player 1, where public randomizations are not available. As in Proposition 3, let

$v_1^* = \max_{\sigma \in \operatorname{graph}(B)} \ \min_{s_1 \in \operatorname{supp}(\sigma_1)} g_1(s_1, \sigma_{-1})$,

and let $\sigma^*$ be a strategy that attains this max. Then, for any $v_1 \in (\underline{v}_1, v_1^*)$ there exists a $\delta' < 1$ such that for all $\delta \in (\delta', 1)$ there is a subgame-perfect equilibrium where player 1's discounted normalized payoff is $v_1$.

Remark: Fix a static Nash equilibrium $\bar{\sigma}$. For each $v_1$ the proof constructs strategies that keep track of the agent's total realized payoff to date t and compare it to the "target" value of $(1-\delta^t)v_1$, which is what the payoff to date would be if the agent received $v_1$ in every period. If $v_1$ exceeds the payoff in the static equilibrium, then play initially follows the (possibly mixed) strategy $\sigma^*$, and whenever the realized total is sufficiently greater than the target value, the agent is "punished" by reversion to the static equilibrium. If the target is less than the static equilibrium payoff, then play starts out at the (possibly mixed) strategy $m^1$, with intermittent "rewards" of the static equilibrium whenever the realized payoff drops too low.

Proof: (A) It is trivial to obtain the static equilibrium payoff as an equilibrium payoff.
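The bookkeeping described in the Remark can be illustrated with a deterministic sketch. The assumptions are mine, not the paper's: $\sigma^*$ is taken to be pure and to give player 1 a payoff of 1 each period, the static equilibrium gives 0, and the target is $v_1 = 0.6$; the simulation only shows how the index $J_t$ tracks the target $R_t = (1-\delta^t)v_1$.

```python
# Deterministic sketch of the "target strategy" bookkeeping in Proposition 4.
# Assumptions: sigma* gives player 1 a payoff of 1 per period (pure, for
# simplicity), the static equilibrium gives 0, and the target is v = 0.6.
delta = 0.99
v = 0.6
T = 5000

J = 0.0  # accrued discounted payoff to date
for t in range(T):
    R_next = (1 - delta ** (t + 1)) * v   # target after period t
    g = 0.0 if J >= R_next else 1.0       # coast on the static eq. when ahead
    J += (1 - delta) * delta ** t * g

print(J)
```

Because the per-period payoff of $\sigma^*$ exceeds the target, $J_t$ never falls below $R_t$, and the intermittent reversions to the static equilibrium keep it from running ahead, so the normalized payoff converges to the target $v$.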

(B) To attain any payoff $v_1$ between the static equilibrium payoff and $v_1^*$ we proceed as follows. Renormalize the payoffs so that the static equilibrium payoff is 0, and take $\delta$ large enough that $(1-\delta)\bar{v}_1 < v_1$. Define $J_0 = 0$ and $\hat{\sigma}(h_0) = \sigma^*$, and for each time t > 0 define the strategies $\hat{\sigma}(h_t)$ and an index $J_t$ as follows:

$J_t = J_{t-1} + (1-\delta)\delta^{t-1} g_1\big(s_1(t-1), \hat{\sigma}_{-1}(h_{t-1})\big)$,

where $s_1(t-1)$ is player 1's action in period t-1 (as opposed to his choice of mixed strategy), and $R_t = (1-\delta^t)v_1$. If player 1's payoff were $v_1$ each period, then his accrued payoff $J_t$ would equal $R_t$. The equilibrium strategies will "punish" the agent whenever $J_t$ exceeds $R_t$ by too large a margin. More precisely, we define

$\hat{\sigma}(h_t) = \bar{\sigma}$ if $J_t \ge R_{t+1}$ and $J_\tau \ge R_\tau$ for all $\tau \le t$;
$\hat{\sigma}(h_t) = \sigma^*$ if $J_t < R_{t+1}$ and $J_\tau \ge R_\tau$ for all $\tau \le t$;
$\hat{\sigma}(h_t) = \bar{\sigma}$ if $J_\tau < R_\tau$ for some $\tau \le t$.

Note that since $J_t$ is a discounted sum, for each infinite history $h_\infty$, $J_t$ converges to a limit $J_\infty$. Moreover, as long as the other players use strategy $\hat{\sigma}_{-1}$, player 1's payoff to any strategy is simply the expected value of $J_\infty$, and his expected payoff in any subgame starting at time t is $\delta^{-t}E(J_\infty - J_t)$.

We will now argue (i) that if player 1 uses strategy $\hat{\sigma}_1$, then $J_t \ge R_t$ for all times t and histories $h_t$, which implies that $J_\infty \ge v_1$; (ii) that regardless of how player 1 plays, $J_\infty \le v_1$, so player 1's payoff in the subgame starting at time t is bounded by $\delta^{-t}(v_1 - J_t)$ for all histories $h_t$; and (iii) that in any subgame where $J_\tau < R_\tau$ at some $\tau \le t$, it is a best response for all players to follow the prescribed strategy of always playing

the static equilibrium $\bar{\sigma}$.

Conditions (i) and (ii) imply that it is a best response for player 1 to play $\hat{\sigma}_1$ in every subgame where J has never dropped below R, and that player 1's equilibrium payoff is $v_1$. Condition (iii), whose proof is immediate, says that following $\hat{\sigma}$ is also a Nash equilibrium in subgames where J has dropped below R, so that $\hat{\sigma}$ is a subgame-perfect equilibrium. (The condition that the short-run players not wish to deviate is incorporated in the construction of $\hat{\sigma}$.)

Proof of (i): We must show that if player 1 follows $\hat{\sigma}_1$ then for all t, $J_t \ge (1-\delta^t)v_1 = R_t$. Since $J_0 = 0$, this is true for t = 0. Assume it is true for $t = \tau$. At period $\tau$, either (a) $J_\tau \ge R_{\tau+1}$, or (b) $R_\tau \le J_\tau < R_{\tau+1}$. In case (a), $\hat{\sigma}(h_\tau) = \bar{\sigma}$. Since $g_1(s_1, \bar{\sigma}_{-1}) = 0$ for every pure strategy $s_1$ in the support of $\bar{\sigma}_1$, we have $J_{\tau+1} = J_\tau \ge R_{\tau+1}$. In case (b), $\hat{\sigma}(h_\tau) = \sigma^*$, so

$\min\big\{g_1\big(s_1(\tau), \hat{\sigma}_{-1}(h_\tau)\big) \mid s_1(\tau) \in \operatorname{supp}(\hat{\sigma}_1(h_\tau))\big\} = v_1^*$,

and $J_{\tau+1} \ge J_\tau + (1-\delta)\delta^\tau v_1^* \ge (1-\delta^\tau)v_1 + (1-\delta)\delta^\tau v_1^* \ge (1-\delta^{\tau+1})v_1 = R_{\tau+1}$, where the second inequality comes from the inductive hypothesis. Thus $J_t \ge (1-\delta^t)v_1$ for all t, and so if player 1 follows $\hat{\sigma}_1$ then $J_\infty \ge v_1$.

Proof of (ii): Next we claim that regardless of how player 1 plays, $J_\infty \le v_1$. If for some history $J_\infty > v_1$, then there is a T' such that $J_t \ge R_{t+1}$ for all t > T'. Thus $\hat{\sigma}_{-1}(h_t) = \bar{\sigma}_{-1}$ for all t > T', and since the most player 1 can get when his opponents play $\bar{\sigma}_{-1}$ is zero, $J_t$ cannot increase thereafter. Let T be the smallest such T', so that $J_{T-1} < (1-\delta^{T-1})v_1$ (since $J_0 = 0$, T > 0). Then $J_\infty \le J_T \le J_{T-1} + \delta^{T-1}(1-\delta)\bar{v}_1 < J_{T-1} + \delta^{T-1}v_1 < v_1$, where we have substituted in our bound on $\delta$, contradicting $J_\infty > v_1$. This argument also shows that player 1's payoff in the subgame starting at time t is bounded by $\delta^{-t}(v_1 - J_t)$, and from part (i) this

payoff can be attained by following $\hat{\sigma}_1$.

(C) Next we show how to construct equilibria (for large enough $\delta$) that yield payoffs $v_1$ between $\underline{v}_1$ and the static equilibrium payoff of zero. Pick a $v_1 \in (\underline{v}_1, 0)$, and choose $\delta$ large enough that $(1-\delta)\min_\sigma g_1(\sigma) > v_1$ and $\delta\underline{v}_1 \le v_1$. Then set $J_0 = 0$ and $\hat{\sigma}(h_0) = m^1$. Now define $J_t(h_t)$ and $\hat{\sigma}(h_t)$ as follows. Set

$J_t = J_{t-1} + (1-\delta)\delta^{t-1} g_1\big(s_1(t-1), \hat{\sigma}_{-1}(h_{t-1})\big)$,

and set $\hat{\sigma}(h_t) = \bar{\sigma}$ if $J_t < R_t$, and $\hat{\sigma}(h_t) = m^1$ if $J_t \ge R_t$.

Proceeding as above, we claim (i) that if player 1 uses strategy $\hat{\sigma}_1$ then $J_t \ge v_1$ for all times t and histories $h_t$, which implies that $J_\infty \ge v_1$; (ii) that regardless of how player 1 plays, $J_\infty \le v_1$, so player 1's payoff in the subgame starting at time t is no greater than $\delta^{-t}(v_1 - J_t)$; and (iii) that in subgames where $J_t < R_t$, it is a best response for player one to play $\hat{\sigma}_1$.

Proof of (i): We must show that if player 1 follows $\hat{\sigma}_1$ then $J_t \ge v_1$ for all t. Since $J_0 = 0$, this is true for t = 0. Assume it is true for $t = \tau$. At period $\tau$, either (a) $J_\tau \ge R_\tau$, or (b) $J_\tau < R_\tau$.

In case (a), $\hat{\sigma}(h_\tau) = m^1$, and

$J_{\tau+1} \ge J_\tau + \delta^\tau(1-\delta)\min_\sigma g_1(\sigma) \ge J_\tau + \delta^\tau v_1$ (from our bound on $\delta$) $\ge R_\tau + \delta^\tau v_1 = v_1$ (by the case hypothesis).

In case (b), $\hat{\sigma}(h_\tau) = \bar{\sigma}$, so $g_1(s_1(\tau), \bar{\sigma}_{-1}) = 0$ for all $s_1(\tau) \in \operatorname{supp}(\bar{\sigma}_1)$, and $J_{\tau+1} = J_\tau \ge v_1$ by the inductive hypothesis. Thus $J_t \ge v_1$ for all t, and so if player 1 follows $\hat{\sigma}_1$ then $J_\infty \ge v_1$.

Proof of (ii): We claim that for all strategies of player 1, $J_t \le R_{t-1}$ for all $t \ge 1$. Since $J_0 = 0 = R_0$ and $m^1$ is played in period 0, $J_1 \le (1-\delta)\underline{v}_1 < 0 = R_0$, so the claim holds for t = 1. Assume it holds for $t = \tau$. If $J_\tau < R_\tau$, then $\bar{\sigma}$ is played and player 1 receives at most 0, so $J_{\tau+1} \le J_\tau < R_\tau$. If $J_\tau \ge R_\tau$, then $m^1$ is played, player 1 receives at most $\underline{v}_1$, and $J_{\tau+1} \le J_\tau + (1-\delta)\delta^\tau \underline{v}_1 \le R_{\tau-1} + (1-\delta)\delta^{\tau-1}(\delta\underline{v}_1) \le R_{\tau-1} + (1-\delta)\delta^{\tau-1}v_1 = R_\tau$ (from the bound on $\delta$). Thus $J_t \le R_{t-1}$ for all t, and since $R_t$ decreases to $v_1$, regardless of how player 1 plays, $J_\infty \le v_1$.

(iii) Conditions (i) and (ii) show that in any subgame with $J_t \ge R_t$, player 1 can attain the upper bound of $\delta^{-t}(v_1 - J_t)$ by following $\hat{\sigma}_1$. Now we consider subgames with $J_t < R_t$. If $J_t \le v_1$, then regardless of how player 1 plays, we will have $J_\tau < R_\tau$ for all $\tau \ge t$, so player 1's opponents will play $\bar{\sigma}$ for the remainder of the game. Here it is clearly a best response for player 1 to play $\hat{\sigma}_1 = \bar{\sigma}_1$. If $J_t > v_1$, then by playing $\bar{\sigma}_1$ player 1 can ensure that $J_\tau \ge R_\tau$ at some $\tau > t$, which ensures that player 1 attains a payoff of $\delta^{-t}(v_1 - J_t)$ in the subgame starting at t. If player 1 instead chooses a strategy which

assigns positive probability to the event that $J_\tau < R_\tau$ for all $\tau > t$, he can only lower his payoff: the payoff for histories with $J_\tau < R_\tau$ for all $\tau$ is less than $v_1$, and the payoff for the histories with $J_\tau \ge R_\tau$ for some $\tau$ is bounded above by $v_1$. Q.E.D.

Proposition 4 shows how to attain any payoffs between $\underline{v}_1$ and $v_1^*$ by means of "target strategies." From Proposition 3 we know that such strategies cannot be used to attain higher payoffs. We think that it is interesting to note where an attempted proof would fail. In part (B), we proved that if player 1 followed $\hat{\sigma}_1$ then for every sequence of realizations player 1's payoff is at least $v_1$. Imagine that we try to attain a payoff $v_1 > v_1^*$ by setting the target $R_t = (1-\delta^t)v_1$. Then in the "reward" phases where $\sigma^*$ is played, it might be that player 1's realized payoff is less than $v_1$. (Recall that by definition it cannot be lower than $v_1^*$.) After a sufficiently long sequence of these outcomes, player 1's realized payoff $J_t$ would be so much lower than the target that even receiving the best possible payoff at every future date would not bring his discounted normalized payoff up to it. This problem of going so far below the target that a return is impossible does not arise with the criterion of time-average payoffs, since the outcomes in any finite number of periods are then irrelevant. For this reason we can attain payoffs above $v_1^*$ under time averaging, as we show in the next section.

5. Time Averaging

The reason that player 1's payoff is bounded by what she obtains when she plays her least favorite strategy in the support of $\sigma^*$ is that every time she plays a different action she must be "punished" in a way that makes all of the actions in the support equally attractive. A similar need for "punishments" along the

equilibrium path occurs in repeated partnership games, where two players make an effort decision that is not observed by the other, and the link between effort and output is stochastic. Since shirking by either player increases the probability of low output, low output must provoke punishment, even though low output can occur when neither player shirks. This is why the best equilibrium outcome is bounded away from efficiency when the payoff criterion is the discounted normalized value (Radner-Myerson-Maskin [1986], Fudenberg-Maskin [1987a]). However, Radner [1986] has shown that efficient payoffs can be attained in partnerships with time averaging. His proof constructed strategies so that (1) if players never cheat, punishment occurs only finitely often, and thus is negligible, and (2) an infinite number of deviations is very likely to trigger a substantial punishment. Since no finite number of deviations can increase the time-average payoff, in equilibrium no one cheats, yet the punishment costs are negligible.

Since the inefficiencies in repeated partnerships and in games with short-run players both stem from the need for punishments along the equilibrium path, it is not surprising that the inefficiencies in our model also disappear when players are completely patient. We prove this with a variant of the "target strategies" we used in Section 4. These strategies differ from Radner's in that even if player 1 plays the equilibrium strategy, she will be punished infinitely often with probability one. However, along the equilibrium path the frequency of punishment converges to zero, so that, as in Radner, the punishment imposes zero cost.

Proposition 5: Imagine that player 1 evaluates payoff streams with the criterion lim inf_{T→∞} E[(1/T) Σ_{t=0}^{T−1} g₁(s(t))]. Then for all v ∈ V there is a subgame-perfect equilibrium with payoffs v.

Remark: The proof is based on a strong law of large numbers for martingales with bounded increments,² which we extend to cover the difference between a supermartingale and its lowest value to date. The relevant limit theory is developed in the Appendix.

Proof: As in Proposition 4, we use different strategies for payoffs above and below some fixed static equilibrium σ. Imagine that v̄₁ exceeds player 1's payoff in this equilibrium, and normalize v̄₁ = 0. Let σ̂ be the (possibly mixed) strategy in graph (B) that maximizes player 1's expected payoff, and define g₁(σ̂) = v̄₁. Define J₀ = 0 and

    J_T = Σ_{t=0}^{T−1} g₁(s₁(t), s₋₁(t)).

(This differs from the definition in Section 4, where we used player 1's realized action and the mixed strategy of her opponents in defining J_T.) Note that player 1's objective function is lim inf E(1/T)J_T. Set

    σ*(h_T) = σ   if J_T > 0,
              σ̂   if J_T ≤ 0.

We claim (i) that no matter how player 1 plays, her payoff is bounded by v̄₁, and (ii) that by following σ̂₁ player 1 can attain payoff v̄₁ almost surely (and hence in expectation). To prove this, let σ₁(h_T) be an arbitrary strategy for player 1, and fix the associated probability distribution over infinite-horizon histories. For each history, let

    R_T(h_T) = {t ≤ T−1 : σ*(h_t) = σ̂}

be the "reward" periods, and let

    P_T(h_T) = {t ≤ T−1 : σ*(h_t) = σ}

be the "punishment" ones. Then let

    M_T(h_T) = Σ_{t ∈ R_T} g₁(x(t))

be the sum of player 1's payoffs in the reward periods, and set

    N_T(h_T) = Σ_{t ∈ P_T} g₁(x(t)).

Note that the reward and punishment sets and the associated scores are defined pathwise, i.e., they depend on the history h_T; henceforth, though, we will omit the history h_T from the notation. Finally, define M̄_T = max_{τ≤T} M_τ and N̲_T = min_{τ≤T} N_τ, and let g̲₁ and ḡ₁ denote the minimum and maximum one-period payoffs for player 1. We claim that for all T,

(5)    g̲₁ + (M_T − M̄_T) ≤ J_T ≤ ḡ₁ + (N_T − N̲_T).

This is clearly true for T = 0. Assume (5) holds for all τ ≤ T. At the start of period T, either (a) J_T > 0 or (b) J_T ≤ 0. In case (a), σ*(h_T) = σ, so period T is a punishment period and J_{T+1} = J_T + (N_{T+1} − N_T). Then

    J_{T+1} ≤ ḡ₁ + (N_T − N̲_T) + (N_{T+1} − N_T) ≤ ḡ₁ + (N_{T+1} − N̲_{T+1}),

and, since J_T > 0 and N_{T+1} − N_T ≥ g̲₁,

    J_{T+1} ≥ g̲₁ ≥ g̲₁ + (M_{T+1} − M̄_{T+1}),

so that (5) is satisfied. In case (b), σ*(h_T) = σ̂, so period T is a reward period and J_{T+1} = J_T + (M_{T+1} − M_T). Then

    J_{T+1} ≥ g̲₁ + (M_T − M̄_T) + (M_{T+1} − M_T) ≥ g̲₁ + (M_{T+1} − M̄_{T+1}),

and, since J_T ≤ 0 and M_{T+1} − M_T ≤ ḡ₁,

    J_{T+1} ≤ ḡ₁ ≤ ḡ₁ + (N_{T+1} − N̲_{T+1}),

so once again (5) is satisfied.
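The switching rule can be illustrated with a stylized simulation, not the paper's exact game: reward-phase payoffs are modeled here as i.i.d. ±1 (mean zero, matching the normalization v̄₁ = 0), and the punishment payoff as −1. Punishments recur forever, but both their frequency and the time-average payoff shrink toward zero.

```python
# A stylized simulation (illustrative numbers, not the paper's game) of
# the rule: play the "reward" profile while J_T <= 0 and the static
# "punishment" profile while J_T > 0.
import random

random.seed(1)
J, punish_count, T = 0.0, 0, 200_000
for t in range(T):
    if J > 0:                       # punishment period
        payoff = -1.0
        punish_count += 1
    else:                           # reward period: mean-zero realized payoff
        payoff = random.choice([-1.0, 1.0])
    J += payoff

print(punish_count / T, J / T)      # both fractions should be near zero
```

The run mirrors the claims in the proof: J_T returns above zero infinitely often (so punishment never stops), yet the fraction of punishment periods and the average payoff (1/T)J_T both vanish as T grows.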

Lemma A.3 in the Appendix shows that (N_T − N̲_T)/T converges to zero almost surely. Since the per-period payoffs are uniformly bounded, this implies that limsup (1/T)J_T ≤ 0 almost surely, and hence limsup (1/T)E(J_T) ≤ 0 as well. Lemma A.4 shows that if player 1 plays so that M_T is a submartingale, then (M̄_T − M_T)/T converges to zero as well. Since this is the case when player 1 follows σ̂₁, the result follows.    Q.E.D.

We can show that with our strategies, player 1 is punished infinitely often (J_T > 0 infinitely often) with probability one. This contrasts with Radner's construction of efficient equilibria for symmetric time-average partnership games, where the probability of infinite punishment is zero. It seems likely that our "target-strategy" approach provides another way of constructing efficient equilibria for those games; it would be interesting to know whether it could be extended to asymmetric partnerships. Our approach has the benefit of making clearer why the construction cannot be extended to the discounting case. It also avoids the need to invoke the law of the iterated logarithm, which may make the proof more intuitive, although we must use the strong law for martingales in its place.

6. Several Long-Run Players with Unobservable Mixed Strategies

The case of several long-run players is more complex, and we have not completely solved it. As before, we can construct mixed-strategy equilibria in which the long-run players do better than in any pure-strategy equilibrium, and once again they cannot do as well as if their mixed strategies were directly observable. However, we do not have a general characterization of the enforceable payoffs. Instead, we offer an example of payoffs that cannot

be enforced, and a very restrictive condition that suffices for enforceability.

Figure 2 presents a three-player version of the game in Figure 1. Row's and Col's choices and payoffs are exactly as before. The third player, Dummy, who is a long-run player, receives 3 if Col plays L and receives 0 otherwise. The feasible payoffs for Row and Dummy are depicted in Figure 3. Consider the feasible point at which p = 1/2 and Col plays L. Here Row and Dummy both receive 3. The argument of Section 3 shows that Row's best equilibrium payoff is not 3 but 2, which is the minimum of her payoff over the actions in the support of her mixed strategy. Dummy is not mixing, so Dummy's minimum payoff over the support of her strategy is 3. (Indeed, this is the minimum over the support of the product of the two strategies.) Thus one might hope that, by analogy to the proof of Proposition 3, we could show that the payoffs (2,3) were enforceable. But these payoffs are not even feasible! The highest Dummy's payoff can be when Row's payoff is 2 is strictly between 2 and 3. (See Figure 3, which depicts the feasible set.) The problem is that an equilibrium in which Row usually randomizes must sometimes have Col play M or R to "tax away" Row's "excess gains" from playing U instead of D, and this "tax" imposes a cost on Dummy.

[Figure 2: a three-player version of the game in Figure 1. Player two is a "dummy"; player three chooses columns.]

[Figure 3: the feasible set when player three plays a short-run best response; the axes are player 1's payoff and player 2's payoff.]

Next, consider the game in Figure 4.

             L           R                      L           R
    U     1, 1, 1     2, 0, 1          U    1, 1, −99    2, 0, 1
    D     0, 2, 1    −1, −1, −9        D     0, 2, 1    −1, −1, −1

            (matrix A)                         (matrix B)

             L           R
    U    14, 4, 0    14, 2, 0
    D    12, 4, 0    12, 2, 0

            (matrix C)

                                Figure 4

In this game, player 1 chooses rows, player 2 chooses columns, and player 3 chooses matrices; players 1 and 2 are long-lived, while player 3 is short-lived. The unique one-shot equilibrium is (U,L,A), with payoff (1,1,1). The long-lived players can obtain a higher payoff if they induce player 3 to choose C, which requires both long-run players to use mixed strategies. [Let p = prob(U) and q = prob(L); then player 3 chooses C if (1−p)(1−q) ≥ 1/10 and pq ≥ 1/100.] For example, if both long-run players randomize evenly between their two actions, the payoffs are (13,3,0). Call this strategy profile σ̂ = (σ̂₁, σ̂₂).
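These claims can be checked numerically. The sketch below uses the payoff entries as read from the (partly garbled) figure, so treat the exact numbers as illustrative rather than authoritative.

```python
# Entries are (player 1, player 2, player 3) for each (row, column) cell
# in matrices A, B, C, as read from the scan of Figure 4.
A = {('U','L'): (1,1,1),   ('U','R'): (2,0,1),
     ('D','L'): (0,2,1),   ('D','R'): (-1,-1,-9)}
B = {('U','L'): (1,1,-99), ('U','R'): (2,0,1),
     ('D','L'): (0,2,1),   ('D','R'): (-1,-1,-1)}
C = {('U','L'): (14,4,0),  ('U','R'): (14,2,0),
     ('D','L'): (12,4,0),  ('D','R'): (12,2,0)}

def expected(matrix, p, q):
    """Expected payoffs when player 1 plays U with prob. p and player 2 plays L with prob. q."""
    weights = {('U','L'): p*q, ('U','R'): p*(1-q),
               ('D','L'): (1-p)*q, ('D','R'): (1-p)*(1-q)}
    return tuple(sum(w * matrix[cell][i] for cell, w in weights.items())
                 for i in range(3))

p = q = 0.5
payoffs = {name: expected(m, p, q) for name, m in [('A', A), ('B', B), ('C', C)]}
print({name: v[2] for name, v in payoffs.items()})   # player 3's payoff per matrix
print(payoffs['C'])                                  # long-run players earn (13, 3)
```

At p = q = 1/2, matrix C is indeed player 3's best response (the −99 and −9 cells make A and B unattractive against genuine randomization), and the long-run players receive (13, 3).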

Now let us explain how to enforce (2,2) as an equilibrium payoff. It will be clear that the construction we develop is somewhat more general than the example; we do not give the general version because it does not lead to a complete characterization of the equilibrium payoffs. As in the proofs of Propositions 1 and 2, the strategies we construct depend on the history of the game only through a number of "state variables," with the current state determined by last period's state and last period's outcome through a (commonly known) transition rule.

Let D and R be the "first" strategies, denoted s₁(1) and s₂(1), and let U and L be the second, s₁(2) and s₂(2). Play begins in state 0. In this state, each player i plays σ̂ᵢ, which gives equal probability weight to his two actions. If player 1 plays action j and player 2 plays action k, the next period's state is (j,k). The payoffs when beginning play in state (j,k) are denoted α(j,k) = (α₁(j,k), α₂(j,k)). In our construction, each player's continuation payoff will be independent of his opponent's last move, so that α₁(j,k) = α₁(j) and α₂(j,k) = α₂(k). Finally, in each state the α's and the specified transition rule will be such that each player is indifferent between his pure strategies and thus is willing to randomize. We will find it convenient to define the α's first and then construct the associated strategies. Set the α's so that

(6)    vᵢ = (1−δ)·gᵢ(sᵢ(j), σ̂₋ᵢ) + δ·αᵢ(j).

In our example, α₁(1) solves 2 = (1−δ)(12) + δα₁(1), so α₁(1) = 12 − 10/δ. Similar computations yield α₁(2) = 14 − 12/δ, α₂(1) = 4 − 2/δ, and

α₂(2) = 2. If the observed play is (s₁(1), s₂(1)), the next period's state is (1,1), with payoffs α(1,1). Here play depends on a public randomization, as follows. Choose a point w in the set P of payoffs attainable without private randomizations, and a probability p ∈ (0,1), such that

(7)    p(1−δ)w + (1−p)v = (1−δp)·α(1,1).

(This is possible for δ sufficiently near one. The general version of this construction imposes a requirement that guarantees (7) can be satisfied.) With probability p, players play the strategies that yield w, and the state remains (1,1). With complementary probability, they play the mixed strategies (σ̂₁, σ̂₂), and the continuation payoffs are exactly as at state 0. Thus, the payoff to player i of choosing strategy sᵢ(j) is

(8)    p[(1−δ)wᵢ + δαᵢ(1)] + (1−p)[(1−δ)gᵢ(sᵢ(j), σ̂₋ᵢ) + δαᵢ(j)]
        = p[(1−δ)wᵢ + δαᵢ(1)] + (1−p)vᵢ = αᵢ(1)

for all strategies sᵢ(j) ∈ supp(σ̂ᵢ), where the first equality uses (6) and the second uses (7). Once again, if any long-run player chooses an action not in the support, play reverts to the static Nash equilibrium.

Now we must specify strategies at states other than (1,1). The state (2,2) occurs if the players chose their most preferred strategies (U,L). Choose a point w′ in P and a p′ ∈ (0,1) such that

(9)    p′(1−δ)w′ + (1−p′)v = (1−δp′)·α(2,2).
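The α's quoted for the example can be checked directly against equation (6). The helper below is ours, not the paper's; the stage payoffs it uses are those earned in matrix C against an opponent who mixes evenly (player 1 earns 12 from D and 14 from U; player 2's two column payoffs are 4 and 2).

```python
# Verify equation (6): v_i = (1-d)*g_i + d*alpha_i, with target v = (2,2).
from fractions import Fraction

def alpha_from_eq6(v, g, d):
    """Solve v = (1-d)*g + d*alpha for alpha (exact rational arithmetic)."""
    return (v - (1 - d) * g) / d

for d in (Fraction(9, 10), Fraction(99, 100)):
    assert alpha_from_eq6(2, 12, d) == 12 - 10 / d   # alpha_1 for D (g = 12)
    assert alpha_from_eq6(2, 14, d) == 14 - 12 / d   # alpha_1 for U (g = 14)
    assert alpha_from_eq6(2, 4, d)  == 4 - 2 / d     # alpha_2 when g_2 = 4
    assert alpha_from_eq6(2, 2, d)  == 2             # alpha_2 when g_2 = 2
print("equation (6) values confirmed")
```

Note that the action with the lower stage payoff gets the higher continuation value, which is exactly what makes each player indifferent across the support.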

(Once again, this is possible for δ near one.) And again, play depends on a public randomization, switching to w′ with probability p′ and otherwise following (σ̂₁, σ̂₂). At state (1,2), play switches with some probability to a point in P for one period, the probability being chosen so that normalized payoffs come out to α(1,2); with complementary probability, players once again play the mixed strategies σ̂, and the same continuation payoffs are used. State (2,1) is symmetric.

Now let us argue that the constructed strategies are an equilibrium for δ sufficiently large. First, if there are no deviations, the payoffs starting in state (j,k) are α(j,k). If player i deviates to an action outside the support of σ̂ᵢ, or if either player deviates when the strategies say to play a pure-strategy point, the deviation is detected, and play reverts to a static equilibrium. For δ near one, all of the α's exceed the static equilibrium payoff, and so for sufficiently large δ no player will choose such a deviation. And, by construction, players are indifferent among the actions in the supports of their mixed strategies if they plan always to conform in the future. Then, by the principle of optimality, no arbitrary sequence of unilateral deviations is profitable.
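Equations (7) and (8) fit together as an accounting identity: if w solves (7), then the public-randomization lottery at state (1,1) returns exactly α(1,1). The sketch below checks this numerically; p = 0.9 and δ = 0.99 are arbitrary illustrative choices, and the second component of α is set to 2 purely for illustration.

```python
# Solve equation (7) for w componentwise, then verify the value identity (8).
d = 0.99
v = (2.0, 2.0)
alpha = (12 - 10 / d, 2.0)    # illustrative continuation values for state (1,1)
p = 0.9

# (7): p(1-d)w + (1-p)v = (1-dp)alpha, solved for w.
w = tuple(((1 - d * p) * a - (1 - p) * vi) / (p * (1 - d))
          for a, vi in zip(alpha, v))

# (8): with prob. p play the profile worth w and keep continuation alpha;
# with prob. 1-p restart the mixed phase, which by (6) is worth v_i.
value = tuple(p * ((1 - d) * wi + d * a) + (1 - p) * vi
              for wi, a, vi in zip(w, alpha, v))
print(w, value)    # value reproduces alpha, as equation (8) requires
```

Rearranging (8) with the bracketed term replaced by vᵢ gives exactly (7), which is why the solve-then-verify loop closes.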

APPENDIX

In this appendix we consider discrete-parameter martingales {x_n, F_n}, n = 0, 1, …, where {F_n} is a filtration on an underlying probability space. We assume that x₀ = 0.

Lemma A.1: Let {x_n, F_n} be a martingale sequence with bounded increments. (That is, for some number B, |x_n − x_{n−1}| ≤ B almost surely.) Then lim_{n→∞} x_n/n = 0 almost surely.

A proof of this lemma can be found in Hall and Heyde (1980, page 36ff.). We also use the following standard adaptation of this strong law:

Lemma A.2: For {x_n, F_n} as above, let X_n = min_{i≤n} x_i. Then lim_{n→∞} X_n/n = 0 almost surely.

Proof: Since x₀ = 0, X_n ≤ 0 for all n. Fix a sample path of the stochastic process. Since X_n/n ≤ 0, we only have to show that lim inf X_n/n = 0. Suppose, instead, that {n_i} is a subsequence along which the limit is less than 0. For each n_i there is m_i ≤ n_i with x_{m_i} = X_{n_i}, and thus 0 > X_{n_i}/n_i = x_{m_i}/n_i ≥ x_{m_i}/m_i. Hence, along the subsequence {m_i}, x_{m_i}/m_i is bounded away from zero, which violates the strong law and so can happen only on a null set.

Lemma A.3: Let {x_n, F_n} be a supermartingale with bounded increments and with x₀ = 0. Let {X_n} be defined from {x_n} as in Lemma A.2. Then lim_{n→∞} (x_n − X_n)/n = 0 almost surely.

Proof: Since x_n ≥ X_n, we only need to show that the limsup of the sequence is nonpositive. For n = 1, 2, …, let ξ_n = x_n − x_{n−1}, and let ξ̄_n = ξ_n − E(ξ_n | F_{n−1}). Note that ξ̄_n ≥ ξ_n, since the conditional means of a supermartingale's increments are nonpositive. Let y_n = Σ_{i=1}^n ξ̄_i, and let Y_n = min{y_i : i = 1, …, n}. Then, immediately, {y_n, F_n} is a martingale sequence with bounded increments, and Lemmas A.1 and A.2 tell us that lim y_n/n = lim Y_n/n = 0, and thus lim (y_n − Y_n)/n = 0. We are done, therefore, once we show that x_n − X_n ≤ y_n − Y_n pointwise. But this is easily done by induction. It is clearly true for n = 0 by convention. Assume it holds for n−1; then, since ξ_n ≤ ξ̄_n,

    x_{n−1} − X_{n−1} + ξ_n ≤ y_{n−1} − Y_{n−1} + ξ̄_n,   or   x_n − X_{n−1} ≤ y_n − Y_{n−1}.

If X_n = X_{n−1}, then since Y_n ≤ Y_{n−1}, we are done. While if X_n ≠ X_{n−1}, then X_n = x_n, and x_n − X_n = 0 ≤ y_n − Y_n.    Q.E.D.

A symmetric argument gives:

Lemma A.4: Let {x_n, F_n} be a submartingale with bounded increments and x₀ = 0. Let X̄_n = max{x_i : i = 1, …, n}. Then lim_{n→∞} (X̄_n − x_n)/n = 0 almost surely.
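A quick Monte Carlo sketch of Lemmas A.1 and A.3 (an illustration, not a proof; the increment distributions are arbitrary bounded choices):

```python
# A martingale with bounded increments has x_n/n -> 0 (Lemma A.1); for a
# supermartingale, the gap between x_n and its running minimum X_n is o(n)
# (Lemma A.3).
import random

random.seed(7)
n = 100_000
x_mart = x_super = 0.0
run_min = 0.0                      # X_n for the supermartingale
for _ in range(n):
    x_mart += random.uniform(-1, 1)            # mean-zero increment
    x_super += random.uniform(-1, 1) - 0.05    # increments with mean -0.05
    run_min = min(run_min, x_super)

print(abs(x_mart) / n)             # Lemma A.1: near zero
print((x_super - run_min) / n)     # Lemma A.3: near zero
```

In the proof of Proposition 5 the supermartingale in question is the punishment score N_T, so Lemma A.3 is what drives the per-period cost of punishment to zero.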

FOOTNOTES

1. The required discount factor can depend on the payoffs to be attained.

2. We thank Ian Johnstone for pointing us to this result.

REFERENCES

Abreu, D. [1986], "On the Theory of Infinitely Repeated Games with Discounting," mimeo.

Dybvig, P., and C. Spatt [1980], "Does It Pay to Maintain a Reputation?", mimeo.

Fudenberg, D., and D. Levine [1983], "Subgame-Perfect Equilibria of Finite and Infinite Horizon Games," Journal of Economic Theory, 31.

Fudenberg, D., and E. Maskin [1986], "The Folk Theorem in Repeated Games with Discounting or with Incomplete Information," Econometrica, 54.

Fudenberg, D., and E. Maskin [1987a], "Discounted Repeated Games with One-Sided Moral Hazard," mimeo, Harvard University.

Fudenberg, D., and E. Maskin [1987b], "Nash and Perfect Equilibria of Discounted Repeated Games," mimeo.

Fudenberg, D., and E. Maskin [1987c], "On the Dispensability of Public Randomizations in Discounted Repeated Games," mimeo.

Hall, P., and C. C. Heyde [1980], Martingale Limit Theory and Its Application, Academic Press, New York.

Kreps, D. [1984], "Corporate Culture," mimeo, Stanford Business School.

Radner, R. [1986], "Repeated Partnership Games with Imperfect Monitoring and No Discounting," Review of Economic Studies, 53.

Radner, R., R. Myerson, and E. Maskin [1986], "An Example of a Repeated Partnership Game with Discounting and with Uniformly Inefficient Equilibria," Review of Economic Studies, 53.

Selten, R. [1977], "The Chain-Store Paradox," Theory and Decision, 9.

Shapiro, C. [1982], "Consumer Information, Product Quality, and Seller Reputation," Bell Journal of Economics, 13.

Simon, H. [1951], "A Formal Theory of the Employment Relationship," Econometrica, 19.


Game Theory. Lecture Notes By Y. Narahari. Department of Computer Science and Automation Indian Institute of Science Bangalore, India October 2012 Game Theory Lecture Notes By Y. Narahari Department of Computer Science and Automation Indian Institute of Science Bangalore, India October 22 COOPERATIVE GAME THEORY Correlated Strategies and Correlated

More information

The Nash equilibrium of the stage game is (D, R), giving payoffs (0, 0). Consider the trigger strategies:

The Nash equilibrium of the stage game is (D, R), giving payoffs (0, 0). Consider the trigger strategies: Problem Set 4 1. (a). Consider the infinitely repeated game with discount rate δ, where the strategic fm below is the stage game: B L R U 1, 1 2, 5 A D 2, 0 0, 0 Sketch a graph of the players payoffs.

More information

March 30, Why do economists (and increasingly, engineers and computer scientists) study auctions?

March 30, Why do economists (and increasingly, engineers and computer scientists) study auctions? March 3, 215 Steven A. Matthews, A Technical Primer on Auction Theory I: Independent Private Values, Northwestern University CMSEMS Discussion Paper No. 196, May, 1995. This paper is posted on the course

More information

Economics and Computation

Economics and Computation Economics and Computation ECON 425/563 and CPSC 455/555 Professor Dirk Bergemann and Professor Joan Feigenbaum Reputation Systems In case of any questions and/or remarks on these lecture notes, please

More information

1 Appendix A: Definition of equilibrium

1 Appendix A: Definition of equilibrium Online Appendix to Partnerships versus Corporations: Moral Hazard, Sorting and Ownership Structure Ayca Kaya and Galina Vereshchagina Appendix A formally defines an equilibrium in our model, Appendix B

More information

Microeconomics II. CIDE, MsC Economics. List of Problems

Microeconomics II. CIDE, MsC Economics. List of Problems Microeconomics II CIDE, MsC Economics List of Problems 1. There are three people, Amy (A), Bart (B) and Chris (C): A and B have hats. These three people are arranged in a room so that B can see everything

More information

On the Lower Arbitrage Bound of American Contingent Claims

On the Lower Arbitrage Bound of American Contingent Claims On the Lower Arbitrage Bound of American Contingent Claims Beatrice Acciaio Gregor Svindland December 2011 Abstract We prove that in a discrete-time market model the lower arbitrage bound of an American

More information

Economics 502 April 3, 2008

Economics 502 April 3, 2008 Second Midterm Answers Prof. Steven Williams Economics 502 April 3, 2008 A full answer is expected: show your work and your reasoning. You can assume that "equilibrium" refers to pure strategies unless

More information

Repeated Games. Econ 400. University of Notre Dame. Econ 400 (ND) Repeated Games 1 / 48

Repeated Games. Econ 400. University of Notre Dame. Econ 400 (ND) Repeated Games 1 / 48 Repeated Games Econ 400 University of Notre Dame Econ 400 (ND) Repeated Games 1 / 48 Relationships and Long-Lived Institutions Business (and personal) relationships: Being caught cheating leads to punishment

More information

Outline for Dynamic Games of Complete Information

Outline for Dynamic Games of Complete Information Outline for Dynamic Games of Complete Information I. Examples of dynamic games of complete info: A. equential version of attle of the exes. equential version of Matching Pennies II. Definition of subgame-perfect

More information

Credible Threats, Reputation and Private Monitoring.

Credible Threats, Reputation and Private Monitoring. Credible Threats, Reputation and Private Monitoring. Olivier Compte First Version: June 2001 This Version: November 2003 Abstract In principal-agent relationships, a termination threat is often thought

More information

M.Phil. Game theory: Problem set II. These problems are designed for discussions in the classes of Week 8 of Michaelmas term. 1

M.Phil. Game theory: Problem set II. These problems are designed for discussions in the classes of Week 8 of Michaelmas term. 1 M.Phil. Game theory: Problem set II These problems are designed for discussions in the classes of Week 8 of Michaelmas term.. Private Provision of Public Good. Consider the following public good game:

More information

A folk theorem for one-shot Bertrand games

A folk theorem for one-shot Bertrand games Economics Letters 6 (999) 9 6 A folk theorem for one-shot Bertrand games Michael R. Baye *, John Morgan a, b a Indiana University, Kelley School of Business, 309 East Tenth St., Bloomington, IN 4740-70,

More information

Unraveling versus Unraveling: A Memo on Competitive Equilibriums and Trade in Insurance Markets

Unraveling versus Unraveling: A Memo on Competitive Equilibriums and Trade in Insurance Markets Unraveling versus Unraveling: A Memo on Competitive Equilibriums and Trade in Insurance Markets Nathaniel Hendren October, 2013 Abstract Both Akerlof (1970) and Rothschild and Stiglitz (1976) show that

More information

A Decentralized Learning Equilibrium

A Decentralized Learning Equilibrium Paper to be presented at the DRUID Society Conference 2014, CBS, Copenhagen, June 16-18 A Decentralized Learning Equilibrium Andreas Blume University of Arizona Economics ablume@email.arizona.edu April

More information

Finitely repeated simultaneous move game.

Finitely repeated simultaneous move game. Finitely repeated simultaneous move game. Consider a normal form game (simultaneous move game) Γ N which is played repeatedly for a finite (T )number of times. The normal form game which is played repeatedly

More information

Repeated Games. Debraj Ray, October 2006

Repeated Games. Debraj Ray, October 2006 Repeated Games Debraj Ray, October 2006 1. PRELIMINARIES A repeated game with common discount factor is characterized by the following additional constraints on the infinite extensive form introduced earlier:

More information

Exercises Solutions: Game Theory

Exercises Solutions: Game Theory Exercises Solutions: Game Theory Exercise. (U, R).. (U, L) and (D, R). 3. (D, R). 4. (U, L) and (D, R). 5. First, eliminate R as it is strictly dominated by M for player. Second, eliminate M as it is strictly

More information

ECON 803: MICROECONOMIC THEORY II Arthur J. Robson Fall 2016 Assignment 9 (due in class on November 22)

ECON 803: MICROECONOMIC THEORY II Arthur J. Robson Fall 2016 Assignment 9 (due in class on November 22) ECON 803: MICROECONOMIC THEORY II Arthur J. Robson all 2016 Assignment 9 (due in class on November 22) 1. Critique of subgame perfection. 1 Consider the following three-player sequential game. In the first

More information

Online Appendix for Debt Contracts with Partial Commitment by Natalia Kovrijnykh

Online Appendix for Debt Contracts with Partial Commitment by Natalia Kovrijnykh Online Appendix for Debt Contracts with Partial Commitment by Natalia Kovrijnykh Omitted Proofs LEMMA 5: Function ˆV is concave with slope between 1 and 0. PROOF: The fact that ˆV (w) is decreasing in

More information

6.896 Topics in Algorithmic Game Theory February 10, Lecture 3

6.896 Topics in Algorithmic Game Theory February 10, Lecture 3 6.896 Topics in Algorithmic Game Theory February 0, 200 Lecture 3 Lecturer: Constantinos Daskalakis Scribe: Pablo Azar, Anthony Kim In the previous lecture we saw that there always exists a Nash equilibrium

More information

Incentive Compatibility: Everywhere vs. Almost Everywhere

Incentive Compatibility: Everywhere vs. Almost Everywhere Incentive Compatibility: Everywhere vs. Almost Everywhere Murali Agastya Richard T. Holden August 29, 2006 Abstract A risk neutral buyer observes a private signal s [a, b], which informs her that the mean

More information

Discounted Stochastic Games with Voluntary Transfers

Discounted Stochastic Games with Voluntary Transfers Discounted Stochastic Games with Voluntary Transfers Sebastian Kranz University of Cologne Slides Discounted Stochastic Games Natural generalization of infinitely repeated games n players infinitely many

More information

Alternating-Offer Games with Final-Offer Arbitration

Alternating-Offer Games with Final-Offer Arbitration Alternating-Offer Games with Final-Offer Arbitration Kang Rong School of Economics, Shanghai University of Finance and Economic (SHUFE) August, 202 Abstract I analyze an alternating-offer model that integrates

More information

Renegotiation in Repeated Games with Side-Payments 1

Renegotiation in Repeated Games with Side-Payments 1 Games and Economic Behavior 33, 159 176 (2000) doi:10.1006/game.1999.0769, available online at http://www.idealibrary.com on Renegotiation in Repeated Games with Side-Payments 1 Sandeep Baliga Kellogg

More information

ECONS 424 STRATEGY AND GAME THEORY MIDTERM EXAM #2 ANSWER KEY

ECONS 424 STRATEGY AND GAME THEORY MIDTERM EXAM #2 ANSWER KEY ECONS 44 STRATEGY AND GAE THEORY IDTER EXA # ANSWER KEY Exercise #1. Hawk-Dove game. Consider the following payoff matrix representing the Hawk-Dove game. Intuitively, Players 1 and compete for a resource,

More information

Microeconomics of Banking: Lecture 5

Microeconomics of Banking: Lecture 5 Microeconomics of Banking: Lecture 5 Prof. Ronaldo CARPIO Oct. 23, 2015 Administrative Stuff Homework 2 is due next week. Due to the change in material covered, I have decided to change the grading system

More information

Answer Key: Problem Set 4

Answer Key: Problem Set 4 Answer Key: Problem Set 4 Econ 409 018 Fall A reminder: An equilibrium is characterized by a set of strategies. As emphasized in the class, a strategy is a complete contingency plan (for every hypothetical

More information

October 9. The problem of ties (i.e., = ) will not matter here because it will occur with probability

October 9. The problem of ties (i.e., = ) will not matter here because it will occur with probability October 9 Example 30 (1.1, p.331: A bargaining breakdown) There are two people, J and K. J has an asset that he would like to sell to K. J s reservation value is 2 (i.e., he profits only if he sells it

More information

10.1 Elimination of strictly dominated strategies

10.1 Elimination of strictly dominated strategies Chapter 10 Elimination by Mixed Strategies The notions of dominance apply in particular to mixed extensions of finite strategic games. But we can also consider dominance of a pure strategy by a mixed strategy.

More information

ECON 459 Game Theory. Lecture Notes Auctions. Luca Anderlini Spring 2017

ECON 459 Game Theory. Lecture Notes Auctions. Luca Anderlini Spring 2017 ECON 459 Game Theory Lecture Notes Auctions Luca Anderlini Spring 2017 These notes have been used and commented on before. If you can still spot any errors or have any suggestions for improvement, please

More information

Antino Kim Kelley School of Business, Indiana University, Bloomington Bloomington, IN 47405, U.S.A.

Antino Kim Kelley School of Business, Indiana University, Bloomington Bloomington, IN 47405, U.S.A. THE INVISIBLE HAND OF PIRACY: AN ECONOMIC ANALYSIS OF THE INFORMATION-GOODS SUPPLY CHAIN Antino Kim Kelley School of Business, Indiana University, Bloomington Bloomington, IN 47405, U.S.A. {antino@iu.edu}

More information

13.1 Infinitely Repeated Cournot Oligopoly

13.1 Infinitely Repeated Cournot Oligopoly Chapter 13 Application: Implicit Cartels This chapter discusses many important subgame-perfect equilibrium strategies in optimal cartel, using the linear Cournot oligopoly as the stage game. For game theory

More information

EC202. Microeconomic Principles II. Summer 2009 examination. 2008/2009 syllabus

EC202. Microeconomic Principles II. Summer 2009 examination. 2008/2009 syllabus Summer 2009 examination EC202 Microeconomic Principles II 2008/2009 syllabus Instructions to candidates Time allowed: 3 hours. This paper contains nine questions in three sections. Answer question one

More information

Game theory for. Leonardo Badia.

Game theory for. Leonardo Badia. Game theory for information engineering Leonardo Badia leonardo.badia@gmail.com Zero-sum games A special class of games, easier to solve Zero-sum We speak of zero-sum game if u i (s) = -u -i (s). player

More information

Price cutting and business stealing in imperfect cartels Online Appendix

Price cutting and business stealing in imperfect cartels Online Appendix Price cutting and business stealing in imperfect cartels Online Appendix B. Douglas Bernheim Erik Madsen December 2016 C.1 Proofs omitted from the main text Proof of Proposition 4. We explicitly construct

More information

On Forchheimer s Model of Dominant Firm Price Leadership

On Forchheimer s Model of Dominant Firm Price Leadership On Forchheimer s Model of Dominant Firm Price Leadership Attila Tasnádi Department of Mathematics, Budapest University of Economic Sciences and Public Administration, H-1093 Budapest, Fővám tér 8, Hungary

More information

Bargaining Order and Delays in Multilateral Bargaining with Asymmetric Sellers

Bargaining Order and Delays in Multilateral Bargaining with Asymmetric Sellers WP-2013-015 Bargaining Order and Delays in Multilateral Bargaining with Asymmetric Sellers Amit Kumar Maurya and Shubhro Sarkar Indira Gandhi Institute of Development Research, Mumbai August 2013 http://www.igidr.ac.in/pdf/publication/wp-2013-015.pdf

More information

ECE 586GT: Problem Set 1: Problems and Solutions Analysis of static games

ECE 586GT: Problem Set 1: Problems and Solutions Analysis of static games University of Illinois Fall 2018 ECE 586GT: Problem Set 1: Problems and Solutions Analysis of static games Due: Tuesday, Sept. 11, at beginning of class Reading: Course notes, Sections 1.1-1.4 1. [A random

More information

Ph.D. MICROECONOMICS CORE EXAM August 2018

Ph.D. MICROECONOMICS CORE EXAM August 2018 Ph.D. MICROECONOMICS CORE EXAM August 2018 This exam is designed to test your broad knowledge of microeconomics. There are three sections: one required and two choice sections. You must complete both problems

More information

Problem Set 2 Answers

Problem Set 2 Answers Problem Set 2 Answers BPH8- February, 27. Note that the unique Nash Equilibrium of the simultaneous Bertrand duopoly model with a continuous price space has each rm playing a wealy dominated strategy.

More information

Microeconomic Theory II Preliminary Examination Solutions Exam date: June 5, 2017

Microeconomic Theory II Preliminary Examination Solutions Exam date: June 5, 2017 Microeconomic Theory II Preliminary Examination Solutions Exam date: June 5, 07. (40 points) Consider a Cournot duopoly. The market price is given by q q, where q and q are the quantities of output produced

More information

Evaluating Strategic Forecasters. Rahul Deb with Mallesh Pai (Rice) and Maher Said (NYU Stern) Becker Friedman Theory Conference III July 22, 2017

Evaluating Strategic Forecasters. Rahul Deb with Mallesh Pai (Rice) and Maher Said (NYU Stern) Becker Friedman Theory Conference III July 22, 2017 Evaluating Strategic Forecasters Rahul Deb with Mallesh Pai (Rice) and Maher Said (NYU Stern) Becker Friedman Theory Conference III July 22, 2017 Motivation Forecasters are sought after in a variety of

More information

Game Theory: Normal Form Games

Game Theory: Normal Form Games Game Theory: Normal Form Games Michael Levet June 23, 2016 1 Introduction Game Theory is a mathematical field that studies how rational agents make decisions in both competitive and cooperative situations.

More information

Economics 431 Infinitely repeated games

Economics 431 Infinitely repeated games Economics 431 Infinitely repeated games Letuscomparetheprofit incentives to defect from the cartel in the short run (when the firm is the only defector) versus the long run (when the game is repeated)

More information

Ph.D. Preliminary Examination MICROECONOMIC THEORY Applied Economics Graduate Program June 2017

Ph.D. Preliminary Examination MICROECONOMIC THEORY Applied Economics Graduate Program June 2017 Ph.D. Preliminary Examination MICROECONOMIC THEORY Applied Economics Graduate Program June 2017 The time limit for this exam is four hours. The exam has four sections. Each section includes two questions.

More information

Lecture 5: Iterative Combinatorial Auctions

Lecture 5: Iterative Combinatorial Auctions COMS 6998-3: Algorithmic Game Theory October 6, 2008 Lecture 5: Iterative Combinatorial Auctions Lecturer: Sébastien Lahaie Scribe: Sébastien Lahaie In this lecture we examine a procedure that generalizes

More information

Topics in Contract Theory Lecture 1

Topics in Contract Theory Lecture 1 Leonardo Felli 7 January, 2002 Topics in Contract Theory Lecture 1 Contract Theory has become only recently a subfield of Economics. As the name suggest the main object of the analysis is a contract. Therefore

More information