Economics 160 Professor Steven Tadelis Stanford University Spring Quarter, 2004 Notes for Section: Week 4 Notes prepared by Paul Riskind (pnr@stanford.edu). Let me know if you spot errors or have questions about these notes.

1 Proofs About Nash Equilibria

In this class, you will often have to prove that all Nash equilibria of a game have a particular characteristic. Problem set 3 required you to complete several such proofs. A useful technique for proving such statements is proof by contradiction. Recall that in a proof by contradiction, you assume the opposite of what you want to prove and then show that the assumption leads to an impossibility. To prove by contradiction that the Nash equilibria of a game have a particular characteristic, do the following:

1. Assume that some equilibrium does not have the characteristic in question.

2. Show that if an equilibrium does not have the characteristic, then some player can profitably deviate from her strategy to another with a higher payoff.

3. Conclude that because some player can profitably deviate from her strategy, she is not playing a best response, and that therefore the strategy profile cannot be a Nash equilibrium. It follows that all equilibria must have the characteristic.

2 Using Subgame Perfect Equilibrium to Analyze the Credibility of Threats and Promises

An easy application of subgame perfect equilibrium (SPE) is to study the credibility of threats and promises. A threat or promise is credible if you have an incentive to carry it out after the party to whom you've made your threat or promise acts.
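Before turning to the examples, the deviation test from Section 1 (step 2) can be made concrete in code. Below is a minimal sketch — the function name and the example game are illustrative, not from the notes. A profile is a Nash equilibrium exactly when no player has a profitable unilateral deviation, which is the fact a proof by contradiction exploits.

```python
def profitable_deviation(payoffs, profile):
    """Return (player, action) if some player can profitably deviate
    from `profile`, else None.  `payoffs` maps action profiles (tuples)
    to tuples of payoffs, one per player."""
    n = len(profile)
    # Each player's action set, read off from the profiles in the table.
    actions = [sorted({p[i] for p in payoffs}) for i in range(n)]
    for i in range(n):
        for a in actions[i]:
            dev = profile[:i] + (a,) + profile[i + 1:]
            if payoffs[dev][i] > payoffs[profile][i]:
                return (i, a)
    return None

# Prisoners' dilemma as a check: (C, C) admits a deviation, (D, D) does not.
pd = {("C", "C"): (3, 3), ("C", "D"): (0, 4),
      ("D", "C"): (4, 0), ("D", "D"): (1, 1)}
print(profitable_deviation(pd, ("C", "C")))  # a deviation exists
print(profitable_deviation(pd, ("D", "D")))  # None -> (D, D) is a NE
```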
2.1 Example 1: Predatory Pricing

The market for dog biscuits has two firms. Call one firm the predator (P) and the other firm the rival (R). The predator is considering whether to engage in predatory pricing: the practice of pricing below cost to inflict losses on a rival and drive her out of the market in the hope of reaping monopoly profits later. The rival first decides whether to stay in the market (S) or exit the market (E). The predator observes the rival's decision and then either engages in predatory pricing against the rival (P) or doesn't engage in predatory pricing (D). If the rival exits the market, the predator gains a monopoly and earns 1000, while the rival gets 0. If the rival stays in the market, the firms share the market and each earns a profit of 300. Predatory pricing costs each firm 400, however, so if the rival stays in the market for dog biscuits (S) and the predator engages in predatory pricing (P), each earns a total of -100. In the following game tree, the predator's payoff appears on top:

[Game tree: R moves first. If R plays E, payoffs are (1000, 0). If R plays S, P moves: playing P gives (-100, -100); playing D gives (300, 300).]

Let's identify all NE and SPE for this game. We'll ignore mixed strategies. We'll first identify all NE by converting this game to normal form. The
payoff matrix is:

                          Rival
                       E           S
Predator   P       1000, 0    -100, -100
           D       1000, 0     300, 300

We'll again use the method of drawing a line below a set of payoffs when the predator is playing a best response and a line above a set of payoffs when the rival is playing a best response. Clearly, the game has two NE in pure strategies: (P, E) and (D, S). Now let's identify the SPE for this game using the extensive form. This game has perfect information (all information sets are singletons), so we can use backward induction. Start with the predator's decision. If the rival stays in the market (S), the predator is strictly better off not engaging in predatory pricing (D), so she'll choose D. Now look at the rival's decision. Anticipating that the predator won't prey on her, the rival will stay in the market (S) so she can earn 300 rather than 0. Thus, the only strategy profile consistent with backward induction, and thus the only SPE, is (D, S). Why isn't (P, E) a SPE? In this strategy profile, the predator threatens to engage in predatory pricing if the rival stays in the market, and the rival heeds her threat and exits the market. It isn't sequentially rational for the predator to carry out her threat, however: if the rival stays in the market, the predator earns a higher payoff from not preying (D). The predator's threat isn't credible because she has no incentive to carry it out.
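The backward-induction argument above can be sketched in a few lines of Python — a hedged illustration; the dictionaries and variable names are mine, not the notes'. Payoffs are ordered (predator, rival).

```python
# If the rival stays, the predator picks the action maximizing her own payoff.
after_stay = {"P": (-100, -100), "D": (300, 300)}
pred_choice = max(after_stay, key=lambda a: after_stay[a][0])

# The rival anticipates that choice when comparing exit with staying.
rival_payoff = {"E": 0, "S": after_stay[pred_choice][1]}
rival_choice = max(rival_payoff, key=lambda a: rival_payoff[a])

print(pred_choice, rival_choice)  # the backward-induction profile (D, S)
```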
2.2 Example 2: Predatory Pricing With Credit Constraints

In a slightly different game, however, the threat to engage in predatory pricing can be credible. Suppose that the two firms play the same game as before, but that this time the rival has limited ability to borrow while it loses money. Assume that if the rival ever loses money, it must go bankrupt and leave the industry. Accordingly, if the predator engages in predatory pricing, it bankrupts the rival, ensuring that the predator will have a monopoly in the market for dog biscuits in the future. So now, if the predator preys upon the rival, both firms lose 400 during the period of predatory pricing, but afterwards the predator gains the monopoly profit of 1000 and the rival gets 0 since she's out of the market. Thus, the predator's total payoff if the rival stays in the market and the predator preys is 1000 - 400 = 600, and the rival's payoff is 0 - 400 = -400. The new game tree is:

[Game tree: R moves first. If R plays E, payoffs are (1000, 0). If R plays S, P moves: playing P gives (600, -400); playing D gives (300, 300).]
Let's again find the NE and SPE for this game. The payoff matrix is:

                          Rival
                       E           S
Predator   P       1000, 0     600, -400
           D       1000, 0     300, 300

Now the only NE is (P, E), and it's easy to show that this strategy profile is also the only SPE. In the subgame beginning with the predator's decision, the Nash equilibrium is for the predator to drive the rival out of the market with predatory pricing. Anticipating that the predator will make good on her threat, the prey will now exit the market without a fight. The difference between this game and the previous game is that in this game, it's actually profitable for the predator to follow through on her threat because she gains a monopoly when the rival goes bankrupt. So in this game, predatory pricing is a credible threat. The beauty of this credible threat is that because the threat is credible, the predator never has to use it! The bottom line: in dynamic games, players playing a NE can make threats or promises that are not credible, in the sense that the players would not have incentives to carry out their threats or promises if necessary. This attribute is a clear defect of NE as a solution concept. SPE rules out threats and promises that aren't credible through its requirement that players choose actions that are sequentially rational at every node, even nodes that are not reached in equilibrium.
3 Complication #1: Subgames With Multiple Nash Equilibria

The following example illustrates how to analyze a dynamic game that has multiple equilibria in one of its subgames. This sort of complication could easily arise in an exam or problem set. Consider the following game in extensive form:

[Game tree: player 1 first chooses L or R. L ends the game with payoffs (3, 0). After R, player 2 chooses X or Y, and player 1 then chooses T or B without observing 2's choice (1's two nodes form one information set). Payoffs (player 1, player 2) are (3, 3) after (T, X), (2, 6) after (T, Y), (6, 2) after (B, X), and (1, 1) after (B, Y).]

First, let's identify all proper subgames of this game. Recall that a proper subgame is a set of nodes and corresponding payoffs satisfying two conditions:
1. A proper subgame must contain a single root node and all its successors.

2. A proper subgame must contain all nodes in a particular information set if it contains any node belonging to that information set.

This game has two proper subgames. The first is the subgame that follows player 1's choice of R; its root is player 2's decision node (it cannot begin at player 1's choice between T and B, since that information set contains two nodes), and in it players 1 and 2 effectively move simultaneously. The second subgame is the entire game. Let's again ignore mixed strategies and find all NE and SPE in pure strategies for this game. We first find all NE by writing out the game in normal form:

                        Player 2
                       X        Y
            LT       3, 0     3, 0
Player 1    LB       3, 0     3, 0
            RT       3, 3     2, 6
            RB       6, 2     1, 1

We can quickly identify three NE in pure strategies: (LT, Y), (LB, Y), and (RB, X). Next, let's find all the SPE of this game. We can't apply backward induction to this game because it is not a game of perfect information: not all information sets consist of single nodes. Instead, we must use the definition of a SPE (it induces a NE in every proper subgame) and identify the NE of every subgame. The first subgame, in which the players choose between T and B and between X and Y, is a 2x2 game with simultaneous moves:

                    Player 2
                   X        Y
Player 1   T     3, 3     2, 6
           B     6, 2     1, 1

This subgame has two NE in pure strategies: (T, Y) and (B, X). Thus, in a SPE for the entire game, the only possible pairs of actions in this subgame are (T, Y) and (B, X). Now consider the second subgame, which is the entire game. Player 1 chooses between L and R anticipating that if she chooses R, either (T, Y) or (B, X) will follow. If player 1 expects player 2 to play Y, then player 1 will choose L, since L gives her 3 rather than 2. If player 1 expects player 2 to play X, then player 1 will choose R, since R gives her 6 rather than 3. Thus, the only possible NE for this subgame are (LT, Y) and (RB, X). We've shown that (LT, Y) and (RB, X) induce NE in both proper subgames, so we've shown that the only two SPE for the entire game are (LT, Y) and (RB, X). (LB, Y) is a NE but not a SPE.
Neither player is playing a best response in the subgame with simultaneous moves. Notice that one of the two SPE, (LT, Y), is Pareto inferior to the other, (RB, X). Both players would rather play (RB, X). The Pareto inferior SPE, (LT, Y), results from player 1's expectation that player 2 would insist upon the equilibrium more favorable to 2 in the subgame with simultaneous moves. Player 1 therefore chooses L to avoid this subgame and obtain a higher payoff.
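To double-check the pure-strategy analysis, here is a hedged sketch (the payoff dictionary and helper function are mine, not the notes') that finds the pure NE of the 4x2 normal form by brute force:

```python
# Payoffs (player 1, player 2) for each (row, column) strategy pair.
payoffs = {
    ("LT", "X"): (3, 0), ("LT", "Y"): (3, 0),
    ("LB", "X"): (3, 0), ("LB", "Y"): (3, 0),
    ("RT", "X"): (3, 3), ("RT", "Y"): (2, 6),
    ("RB", "X"): (6, 2), ("RB", "Y"): (1, 1),
}
rows = ["LT", "LB", "RT", "RB"]
cols = ["X", "Y"]

def is_ne(r, c):
    """True when neither player can strictly gain by a unilateral switch."""
    u1, u2 = payoffs[(r, c)]
    return (all(payoffs[(r2, c)][0] <= u1 for r2 in rows)
            and all(payoffs[(r, c2)][1] <= u2 for c2 in cols))

print([rc for rc in payoffs if is_ne(*rc)])  # the three pure NE
```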
The bottom line: when a subgame has multiple NE, you must check for SPE inducing each of the possible NE in the subgame.

4 Complication #2: Subgames With Nash Equilibria in Mixed Strategies

Let's reexamine the previous example, but this time allow the players to play mixed strategies in the subgame with simultaneous moves. (We could allow the players to play any mixed strategies, but the analysis would quickly get tedious.) Again, this complication has appeared on previous exams. We'll now check whether allowing mixed strategies in the subgame with simultaneous moves changes the set of NE for this game. We're now allowing player 1 to mix between actions T and B, and we're allowing player 2 to mix between X and Y. Since player 1 can mix between actions T and B, she can mix between strategies LT and LB, or between RT and RB. Recall that a player only plays a mixed strategy in Nash equilibrium when she's indifferent among the strategies she plays with positive probability and (weakly) prefers these strategies to any she does not play with positive probability. Accordingly, we must check whether player 1 can play a mixed strategy that makes player 2 indifferent between her strategies, and whether player 2 can make player 1 indifferent between either LT and LB, or RT and RB. As usual, denote mixed strategies by vectors of probabilities assigned to pure strategies. We quickly see that there are two ways that player 1 can make player 2 indifferent. If player 1 plays any mix of LT and LB, then player 2 will receive the same expected payoff (0) from X and Y. Player 1 can also make player 2 indifferent by playing the mixed strategy σ1 = (0, 0, 0.25, 0.75), which gives player 2 a payoff of 2.25 from both her pure strategies. Now we'll see whether player 2 has a mixed strategy that will make player 1 willing to mix. If player 2 plays σ2 = (0.25, 0.75), then player 1 will be indifferent between RT and RB, since both will yield a payoff of 2.25.
Player 1 won't be willing to mix between RT and RB in equilibrium, however, because she gets a higher payoff (3) from either LT or LB. Any mixed strategy of player 2 makes player 1 indifferent between LT and LB, but we need to make sure that player 1 does not prefer RT or RB to LT and LB when player 2 plays her mixed strategy. If player 2 plays σ2 = (q, 1 - q) with q ∈ [0, 1], then player 1's payoffs from her pure strategies are:

u1(LT) = u1(LB) = 3
u1(RT) = 3q + 2(1 - q) = 2 + q
u1(RB) = 6q + (1 - q) = 1 + 5q

Player 1 will never strictly prefer RT to a mix of LT and LB, since for q ∈ [0, 1], u1(RT) = 2 + q ≤ 3 = u1(LT) = u1(LB). Player 1 will strictly prefer RB to
a mix of LT and LB, however, if u1(RB) = 1 + 5q > 3, which happens when q > 0.4. So player 1 will only mix between LT and LB when player 2 plays the mixed strategy σ2 = (q, 1 - q) with q ∈ [0, 0.4]. We conclude that allowing the players to use mixed strategies in the subgame with simultaneous moves creates a continuum of new Nash equilibria of the form σ1 = (p, 1 - p, 0, 0), σ2 = (q, 1 - q) with p ∈ [0, 1] and q ∈ [0, 0.4]. We now check whether allowing mixed strategies changes the set of SPE. First, observe that allowing mixed strategies in the subgame with simultaneous moves gives this subgame a new NE. The players are both indifferent between their pure actions if and only if they play the profile of mixed actions σ̂1 = (0.25, 0.75) and σ̂2 = (0.25, 0.75). (Recall that we solve for equilibria in mixed strategies by first finding the set of mixed strategies for player i that make player j indifferent among some of his pure strategies and then finding the set of mixed strategies for j that make i indifferent among some of his pure strategies.) If they play these mixed actions, then both players receive payoffs of 2.25 from each of their two pure actions, so both are indifferent and therefore willing to mix in equilibrium. This subgame has no other equilibria in mixed strategies. Second, observe that if the players play σ̂1 = (0.25, 0.75) and σ̂2 = (0.25, 0.75) in the subgame with simultaneous moves, then player 1 will choose L, giving her a payoff of 3 rather than the payoff of 2.25 from choosing R. Thus, allowing mixed strategies in the subgame with simultaneous moves creates only one new SPE: σ̃1 = (0.25, 0.75, 0, 0) and σ̃2 = (0.25, 0.75). (You should be able to express these mixed strategies as behavioral strategies!) All the other new NE are not SPE, because in these other new NE the players do not play best responses in the subgame with simultaneous moves. Bottom line: watch out for equilibria in mixed strategies even in dynamic games, and look for SPE in which players mix in some subgames.
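The indifference conditions above are easy to verify numerically. A hedged sketch (the matrix layout and variable names are mine); player 1's pure strategies are ordered (LT, LB, RT, RB) and player 2's are (X, Y):

```python
U1 = [[3, 3], [3, 3], [3, 2], [6, 1]]   # player 1's payoffs
U2 = [[0, 0], [0, 0], [3, 6], [2, 1]]   # player 2's payoffs

def expected(col, q):
    """Player 1's expected payoff from each pure strategy when 2 plays (q, 1-q)."""
    return [q * col[s][0] + (1 - q) * col[s][1] for s in range(4)]

# At q = 0.25, player 1 gets 2.25 from both RT and RB but 3 from LT or LB,
# so she mixes on LT/LB only; q <= 0.4 keeps RB's payoff 1 + 5q below 3.
print(expected(U1, 0.25))   # [3.0, 3.0, 2.25, 2.25]
print(1 + 5 * 0.4)          # 3.0: the boundary where RB catches up

# sigma1 = (0, 0, 0.25, 0.75) makes player 2 indifferent between X and Y:
sigma1 = [0, 0, 0.25, 0.75]
print([sum(sigma1[s] * U2[s][a] for s in range(4)) for a in (0, 1)])  # [2.25, 2.25]
```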
5 Complication #3: Continuous Action Spaces

So far, we've only analyzed examples in which the players had finite sets of actions at each information set. You should also, however, be prepared to solve problems in which the players have infinite sets of actions at some information sets. You and your dog own the only two firms in the dog biscuit industry. Dog biscuits are differentiated products. You and your dog compete by choosing prices. Let p_Y be your price, and let your dog's price be p_D. Your demand is D_Y(p_Y, p_D) = A - αp_Y + βp_D, and your dog's demand is D_D(p_D, p_Y) = A - αp_D + βp_Y. Notice that each firm's demand decreases with its own price and increases with its rival's price. Assume α > β > 0, so that each firm's demand is more sensitive to its own price than to its rival's price. To keep the analysis simple, also assume that the firms only choose prices for which the
preceding expressions give non-negative demands. Assume both firms have zero costs, so the profit functions are:

π_Y(p_Y, p_D) = p_Y D_Y(p_Y, p_D) = p_Y(A - αp_Y + βp_D)
π_D(p_D, p_Y) = p_D D_D(p_D, p_Y) = p_D(A - αp_D + βp_Y)

Throughout this problem, ignore mixed strategies. First, suppose you and your dog choose prices simultaneously. We find the best response functions by maximizing each firm's profit with respect to its own price, taking the other firm's price as given. The first order conditions are:

∂π_Y(p_Y, p_D)/∂p_Y = A - 2αp_Y + βp_D = 0
∂π_D(p_D, p_Y)/∂p_D = A - 2αp_D + βp_Y = 0

(Throughout this example, second order conditions are satisfied because each firm's profit function is concave in its own price.) The best response functions are:

BR_Y(p_D) = (A + βp_D)/(2α)
BR_D(p_Y) = (A + βp_Y)/(2α)

In Nash equilibrium, both firms play best responses, so we solve the pair of equations p_Y = BR_Y(p_D) and p_D = BR_D(p_Y) to obtain p_Y = p_D = A/(2α - β). This pair of prices is the unique Nash equilibrium.

Now suppose that you choose your price first, and then your dog chooses his price after observing your price. Let's find all the SPE in pure strategies. We'll also note the possibility of NE that are not SPE. Before we begin, ask yourself how many proper subgames this game has. Since a unique subgame follows every possible choice of price you can make, and since you can choose from infinitely many possible prices, there must be infinitely many proper subgames. Now ask yourself how many proper subgames the game had when you and your dog chose prices simultaneously. Since neither player knew the other's price, there was only one subgame: the game as a whole. We conclude that a game with continuous strategy sets will often, but not always, have infinitely many proper subgames. Observe that this game has perfect information: all information sets consist of single nodes. We can therefore identify all SPE using backward induction. Your dog chooses his optimal price after observing your price.
Given your price, his best response function provides his unique profit-maximizing price. He therefore sets p_D = BR_D(p_Y) = (A + βp_Y)/(2α). We have found the only NE of the subgames beginning with your dog's choice of price.
We now look for NE of the only other subgame (the game as a whole) that induce NE in the subgames beginning with your dog's choice. Since we must have NE in the subgames beginning with your dog's choice, we know that you anticipate he'll choose p_D = (A + βp_Y)/(2α). You therefore solve:

max over p_Y of π_Y(p_Y, BR_D(p_Y)) = π_Y(p_Y, (A + βp_Y)/(2α)) = p_Y(A - αp_Y + β(A + βp_Y)/(2α))

Your first order condition is:

A(1 + β/(2α)) - 2(α - β²/(2α))p_Y = 0

You therefore choose p_Y = A(2α + β)/(4α² - 2β²). We have now found the only NE in the game as a whole that induces a NE in every subgame, i.e., the only SPE. It is:

p_Y = A(2α + β)/(4α² - 2β²)
p_D(p_Y) = (A + βp_Y)/(2α)

With a bit more algebra, you can show that you choose a higher price in this game than in the game in which you and your dog choose prices simultaneously. Since you choose a higher price in this game, and your dog's best response function increases with your price, your dog also chooses a higher price. In effect, you commit to a high price by moving first, and your dog, secure in the knowledge that your price will be high, also chooses a high price. With still more algebra, we can show that your profit is lower than your dog's profit, so unlike in the sequential Cournot game, moving first in this game is a disadvantage. Nonetheless, both players are strictly better off in the sequential game than in the game with simultaneous moves! Here's a numerical example to illustrate these points. Assume the following parameter values: A = 30, α = 2, β = 1. In the pricing game with simultaneous moves, the outcome is:

p_Y = p_D = 10
q_Y = q_D = 20
π_Y = π_D = 200
In the sequential pricing game in which you move first, the outcome is:

p_Y ≈ 10.71
p_D ≈ 10.18
q_Y = 18.75
q_D ≈ 20.36
π_Y ≈ 200.89
π_D ≈ 207.21
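As a closing check, the whole comparison can be reproduced in a few lines of Python — a hedged sketch using the demand and best-response formulas derived above (variable names are mine):

```python
A, alpha, beta = 30, 2, 1          # the parameter values from the notes

def demand(p_own, p_other):
    # D(p_own, p_other) = A - alpha*p_own + beta*p_other
    return A - alpha * p_own + beta * p_other

def br(p_other):
    # Best response with zero costs: (A + beta*p)/(2*alpha)
    return (A + beta * p_other) / (2 * alpha)

# Simultaneous moves: both prices equal A/(2*alpha - beta).
p_sim = A / (2 * alpha - beta)
print(p_sim, p_sim * demand(p_sim, p_sim))   # 10.0 200.0

# Sequential moves: you commit first, your dog best-responds.
p_y = A * (2 * alpha + beta) / (4 * alpha ** 2 - 2 * beta ** 2)
p_d = br(p_y)
pi_y = p_y * demand(p_y, p_d)
pi_d = p_d * demand(p_d, p_y)
print(round(p_y, 2), round(p_d, 2))     # 10.71 10.18
print(round(pi_y, 2), round(pi_d, 2))   # 200.89 207.21
```

Both prices rise relative to the simultaneous game, and the second mover ends up with the higher profit, confirming the discussion above.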