Second Midterm Answers
Prof. Steven Williams
Economics 502
April 3, 2008

A full answer is expected: show your work and your reasoning. You can assume that "equilibrium" refers to pure strategies unless the question indicates otherwise.

1. (30) Consider the following two-person game:

1\2     a_2     b_2
a_1     3, 3    0, 4
b_1     4, 0    1, 1

a. Suppose first that the players discount their payoffs in the infinitely repeated game. Player i therefore calculates his payoff as $\sum_{t=0}^{\infty} \delta_i^t \pi_{it}$, where $\delta_i$ is player i's discount factor and $\pi_{it}$ is the payoff to player i in the t-th repetition of the game.

i. (10) Present subgame perfect Nash equilibrium strategies in the infinitely repeated game that result in (3, 3) as the outcome of each stage on the equilibrium path. Be explicit in presenting the strategies. Your answer will involve presenting bounds on the discount factors that are sufficient to insure that your strategies are in fact an equilibrium. Hint: you should take into account the symmetry of the game; see also ii. below, in which you are asked to explain the subgame perfection of your answer.

Player i uses the following strategy: play $a_i$ to start the game and in stage t if the outcome has been $(a_1, a_2)$ in each preceding stage of the game; if the outcome has been different from $(a_1, a_2)$ in some stage before t, then play $b_i$ in stage t and in all subsequent stages of the repeated game.

Suppose that player i contemplates a deviation from this strategy in stage k by switching to $b_i$. For the specified strategies to form a Nash equilibrium, it must be true that

$$\sum_{t=0}^{k-1} \delta_i^t \cdot 3 + \sum_{t=k}^{\infty} \delta_i^t \cdot 3 \;\geq\; \sum_{t=0}^{k-1} \delta_i^t \cdot 3 + \delta_i^k \cdot 4 + \sum_{t=k+1}^{\infty} \delta_i^t \cdot 1,$$
or

$$\sum_{t=k}^{\infty} \delta_i^t \cdot 3 \;\geq\; \delta_i^k \cdot 4 + \sum_{t=k+1}^{\infty} \delta_i^t \cdot 1.$$

Summing the geometric series on each side,

$$\frac{\delta_i^k \cdot 3}{1-\delta_i} \;\geq\; \delta_i^k \cdot 4 + \frac{\delta_i^{k+1}}{1-\delta_i}.$$

Dividing by $\delta_i^k$ and multiplying by $1-\delta_i$ gives

$$3 \;\geq\; 4(1-\delta_i) + \delta_i = 4 - 3\delta_i,$$

and hence

$$\delta_i \geq \frac{1}{3}.$$

Full credit is given for the use of a formula to derive the bound on $\delta_i$.

ii. (5) Explain why your equilibrium is in fact subgame perfect.

A history that precedes a subgame has either (1) the play of $(a_1, a_2)$ in each stage of the history, or (2) the play of some other strategies in some stage of the history. In case (1), the strategies form a Nash equilibrium in the subgame by virtue of the calculation in i. In case (2), the players play $(b_1, b_2)$ in every stage of the subgame. Because $(b_1, b_2)$ is a Nash equilibrium of the stage game itself, playing $(b_1, b_2)$ in every stage is a Nash equilibrium of the subgame.

b. Now suppose that each player evaluates his payoff in the infinitely repeated game by taking a limiting average value. Player i's payoff is therefore

$$\liminf_{k \to \infty} \frac{\sum_{t=0}^{k} \pi_{it}}{k+1},$$

where $\pi_{it}$ is the payoff to player i in the t-th repetition of the game (the "k + 1" reflects the fact that the first play of the game is stage 0).

i. (5) Present subgame perfect Nash equilibrium strategies in the infinitely repeated game that result in

$$\frac{1}{3}(0, 4) + \frac{1}{3}(3, 3) + \frac{1}{3}(4, 0) = \left(\frac{7}{3}, \frac{7}{3}\right)$$

as the average payoff vector for the two players.

Define a cycle for player 1 as (1) choose $a_1$, (2) choose $a_1$, (3) choose $b_1$, and for player 2 as (1) choose $b_2$, (2) choose $a_2$, (3) choose $a_2$. Each player starts the game at (1) in stage 0, and continues to follow the cycle (2), (3), (1), ... The exception is any history in which either player has failed to follow his cycle. In this case, each player i switches to $b_i$ for every stage of the game following the history.
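Both calculations above can be checked numerically. The sketch below (an illustration added here, not part of the original answer key) approximates the infinite discounted sums by truncating at a long finite horizon; by symmetry and stationarity it suffices to test a deviation at stage 0.

```python
# Numerical check of question 1: the grim-trigger bound delta >= 1/3 (a.i)
# and the three-stage cycle's average payoff (7/3, 7/3) (b.i).

def discounted_cooperate(delta, T=2000):
    # Payoff from the outcome (a1, a2) = (3, 3) in every stage.
    return sum(delta**t * 3 for t in range(T))

def discounted_deviate(delta, T=2000):
    # Deviate at stage 0: earn 4 once, then (b1, b2) = (1, 1) forever after.
    return 4 + sum(delta**t * 1 for t in range(1, T))

# Cooperation beats deviation exactly when delta >= 1/3.
assert discounted_cooperate(0.4) > discounted_deviate(0.4)
assert discounted_cooperate(0.3) < discounted_deviate(0.3)

# Average payoff of the cycle (a1, b2), (a1, a2), (b1, a2).
cycle = [(0, 4), (3, 3), (4, 0)]
avg = tuple(sum(coord) / 3 for coord in zip(*cycle))
assert avg == (7 / 3, 7 / 3)  # both players average 7/3
```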
ii. (5) Draw a graph of all of the vectors that can be the average payoffs in subgame perfect Nash equilibria of the infinitely repeated game. Be sure to label all points of interest.

iii. (5) Explain in words why (0, 4) cannot be the average payoff vector for a Nash equilibrium of the infinitely repeated game.

Suppose we had a Nash equilibrium pair of strategies that resulted in player 1 receiving 0 as his average payoff in the infinitely repeated game. By switching to the strategy in which he plays $b_1$ in every stage, player 1 would receive either 4 or 1 in each stage, and so would increase his average payoff to at least 1. The original pair of strategies thus could not be a Nash equilibrium.

2. (20) Consider the principal-agent model discussed in class in which the action of the agent is observable but the disutility of his effort is not. There are two possible states, H and L, with the probability of state H equal to $\lambda$. The game begins with the principal offering the agent a pair of contracts $(e_H, w_H)$ and $(e_L, w_L)$, each specifying an action and a wage in a state. The agent can accept or reject the contract; if he rejects it, then he receives his reservation utility $\bar{u}$. If he accepts it, the state is then realized and observed by the agent. He then chooses an action $e \geq 0$. The action $e$ determines a profit $\pi(e)$ for the principal, and the principal's payoff in the game is this profit minus the wage he pays to the agent. The utility of the agent is $u(w, e) = v(w) - g(e, \theta)$, where $v' > 0$, $v'' < 0$, and $g(e, H) < g(e, L)$.

a. (10) Suppose that both the principal and the agent observe both the state and the action $e$. Formally state the optimization problem of the principal. Note: you are not asked to reduce or solve this problem.

$$\max_{(e_H, w_H),\, (e_L, w_L)} \; \lambda\left(\pi(e_H) - w_H\right) + (1-\lambda)\left(\pi(e_L) - w_L\right)$$
$$\text{s.t.} \quad \lambda\left(v(w_H) - g(e_H, H)\right) + (1-\lambda)\left(v(w_L) - g(e_L, L)\right) \geq \bar{u}$$

b. (10) Now suppose that only the agent observes the state, while both the principal and the agent observe the action $e$.
After the agent observes the state, he is asked to report it to the principal in order to determine which of the pairs $(e_H, w_H)$ or $(e_L, w_L)$ will be applied to the agent. Present the two incentive constraints that are added to your optimization problem in a. that insure honest reporting of the state by the agent.

$$v(w_H) - g(e_H, H) \geq v(w_L) - g(e_L, H)$$
$$v(w_L) - g(e_L, L) \geq v(w_H) - g(e_H, L)$$

3. (30) Consider the signaling game depicted below. Nature begins the game by
determining the type of player 1, who is "weak" with probability 0.9 and "strong" with probability 0.1. The "weak" player 1 prefers "Quiche" (Q) over "Beer" (B) for breakfast, while the "strong" player 1 prefers "Beer" over "Quiche". The choice of breakfast by player 1 is observed by player 2, who then decides whether to fight (F) player 1 or leave him alone (N). The payoffs are as indicated. Because only player 1 knows whether he is "weak" or "strong", the example considers the possibility that player 1 may want to signal that he is "strong" by having "Beer" for breakfast, thus deterring player 2.

a. (10) Devise a sequential equilibrium that is a separating equilibrium, or show that no such equilibrium exists.

First consider the case in which the "strong" type chooses "Beer" and the "weak" type chooses "Quiche":
This is not an equilibrium because the "weak" player 1 would switch from Q to B.

Now assume the "strong" type chooses "Quiche" and the "weak" type chooses "Beer": this is not an equilibrium because the "weak" player 1 would switch from B to Q. No separating equilibrium exists.

b. (10) Devise a sequential equilibrium that is a pooling equilibrium in which both the "weak" and the "strong" player 1 choose "Beer" for breakfast.
We need to determine the action and the off-the-equilibrium-path belief of player 2 at the top information set. Let p denote the probability that player 2 assigns there to player 1 being "strong". His expected payoff from choosing N is p and his expected payoff from choosing F is 1 - p. Player 2 therefore chooses N if p > 1/2 and F if p < 1/2. If player 2 chooses N at the top information set, then B will not be a best choice for player 1 when he is "weak". Consequently, we must look for an equilibrium in which he chooses F, which requires p ≤ 1/2. Any such p completes the definition of a sequential equilibrium.

c. (10) Devise a sequential equilibrium that is a pooling equilibrium in which both the "weak" and the "strong" player 1 choose "Quiche" for breakfast.
We need to determine a value of p that supports 2's choice of F at the bottom information set. His expected payoff from N is p and his expected payoff from F is 1 - p, and so choosing F requires p ≤ 1/2. This completes the definition of the pooling equilibrium.

4. (20) A seller has a car that he may sell to a buyer. The quality of the car is $\theta \in [0, 1]$. The value of the car to the seller is $\theta$; his utility if he sells the car is $p$, where $p$ is the money he receives for it, and his utility is $\theta$ if he retains the car. A buyer or a seller wishes to have at most one car. The buyer's utility for the car if he buys it and pays $p$ is $\frac{3\theta}{2} - p$, and his utility is 0 if he does not buy the car. The seller alone knows the quality of his car. The buyer believes that $\theta$ is uniformly distributed on $[0, 1]$.

a. (5) Given a value of $\theta$, determine the "supply for cars" in this market with two traders: at each price $p \geq 0$, is the seller willing to sell the car, given the value of $\theta$?

The seller is willing to sell his car if $p \geq \theta$.

b. (5) Determine the "demand for cars" in this market with two traders: at each price $p \geq 0$, is the buyer willing to buy the car? Hint: demand is determined by the buyer, and hence it should reflect his uncertainty concerning $\theta$.

If the car is offered for sale at the price $p \in (0, 1]$, its expected quality is $p/2$, and the payoff to the buyer from purchasing it is therefore

$$\frac{3}{2} \cdot \frac{p}{2} - p = \frac{3p}{4} - p = -\frac{p}{4} < 0.$$

The buyer is therefore willing to buy the car only at the price of 0, choosing instead the payoff of 0 associated with not owning a car. At the price $p = 0$, he is indifferent between "buying" and "not buying" the car.

c. (5) For each $\theta$ does there exist a price $p$ at which supply equals demand? I.e., what are the market-clearing prices for each $\theta$? Hint: you can use your answers to a. and b.

For each $\theta$, supply = demand = 0 only at prices $p < \theta$; no trade occurs at a market-clearing price.

d. (5) How does this problem relate to the Akerlof market for lemons?
This is a simple illustration of the market for lemons, in the sense that the seller's private information about the quality of his car prevents its sale at any price, even though for each $\theta$ a price exists at which both the buyer and the seller would profit from the transaction. Akerlof did model a market (i.e., multiple traders on each side), as opposed to the bilateral trading situation considered here. This enabled him to avoid the criticism that bargaining may be inefficient because of private information while markets with a sufficient number of traders are efficient (they aren't in his model). Both types of traders may want cars in his model, and a trader may want more than one car. These are differences in modeling, however, that do not address the main point of inefficiency due to incomplete information.
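The no-trade logic of question 4 can also be verified by simulation. The sketch below (an illustration added here, with function names of my own choosing, not part of the original answer key) draws qualities uniformly on [0, 1], keeps the cars whose sellers would sell at price p, and checks that the buyer's expected payoff from purchasing is negative at every positive price, as derived in part b.

```python
# Simulation of question 4: at any price p > 0, only sellers with quality
# theta <= p are willing to sell, so the average quality on offer is about
# p/2 and the buyer's expected payoff is (3/2)*(p/2) - p = -p/4 < 0.

import random

def buyer_expected_payoff(p, n=100_000, seed=0):
    rng = random.Random(seed)
    qualities = [rng.random() for _ in range(n)]   # theta ~ U[0, 1]
    offered = [q for q in qualities if q <= p]     # seller sells iff p >= theta
    if not offered:
        return 0.0
    avg_quality = sum(offered) / len(offered)      # approximately p/2
    return 1.5 * avg_quality - p                   # buyer values the car at (3/2)*theta

for p in (0.2, 0.5, 1.0):
    # Expected payoff is about -p/4, so the buyer refuses every positive price.
    assert buyer_expected_payoff(p) < 0
```

The simulation confirms the unraveling: raising the price raises the average quality offered, but never fast enough to make purchase worthwhile, so no positive price clears the market with trade.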