Microeconomics of Banking: Lecture 5


Microeconomics of Banking: Lecture 5 Prof. Ronaldo CARPIO Oct. 23, 2015

Administrative Stuff Homework 2 is due next week. Due to the change in material covered, I have decided to change the grading system for the course. We will have an open-book midterm and final exam. Homework: 15 %, Midterm: 35 %, Final: 50 % The midterm exam will be on Nov. 6, and will cover the general equilibrium model of asset pricing and game theory. There will be a true-false section, and 3 problems similar to the ones on the homeworks.

Review of Last Week A game is a model of a strategic situation, in which there are many decision-makers that can affect each other. We formulate a strategic game as having three components: the players; for each player, a set of actions; and for each player, preferences over all possible outcomes. An outcome is determined by the actions chosen by all players.

Review of Last Week Game theory is the analysis of strategic situations. We want to have some way of predicting the outcome (i.e. the choices of all players) of a situation. Complete prediction is difficult, so we can try an easier task: find a steady state. A Nash equilibrium (NE) is a steady state, under the assumption that all players choose their actions unilaterally (i.e. acting alone). In a NE, no player has an incentive to deviate (i.e. change his action). Note that this doesn't say anything about how players learn to find NE or which NE (if there are many) is chosen.

Prisoner's Dilemma

                  Player 2
                  Q      F
    Player 1  Q  2,2    0,3
              F  3,0    1,1

Each player has 2 actions: Q and F. Each cell shows the payoffs to the players if the corresponding actions are chosen. (F, F) is the unique Nash equilibrium.
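The uniqueness of (F, F) can be verified by brute force over all action profiles. A minimal Python sketch (the dictionary encoding of the payoff matrix is an illustration, not the lecture's notation):

```python
# Payoff matrix of the Prisoner's Dilemma: (Player 1 action, Player 2 action)
# maps to the payoff pair (u1, u2).
payoffs = {
    ("Q", "Q"): (2, 2), ("Q", "F"): (0, 3),
    ("F", "Q"): (3, 0), ("F", "F"): (1, 1),
}
actions = ["Q", "F"]

def is_nash(a1, a2):
    """A profile is a NE if neither player has a profitable unilateral deviation."""
    u1, u2 = payoffs[(a1, a2)]
    no_dev_1 = all(payoffs[(d, a2)][0] <= u1 for d in actions)
    no_dev_2 = all(payoffs[(a1, d)][1] <= u2 for d in actions)
    return no_dev_1 and no_dev_2

equilibria = [(a1, a2) for a1 in actions for a2 in actions if is_nash(a1, a2)]
print(equilibria)  # [('F', 'F')]
```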

Best Response Functions Suppose that the players other than Player i play the action list a_{-i}. Let B_i(a_{-i}) be the set of Player i's best (i.e. payoff-maximizing) actions, given that the other players play a_{-i}. (There may be more than one.) B_i is called the best response function of Player i. B_i is a set-valued function: it may give a result with more than one element. Every member of B_i(a_{-i}) is a best response of Player i to a_{-i}.

Using Best Response Functions to find Nash Eq. Proposition: The action profile a* is a Nash equilibrium if and only if every player's action is a best response to the other players' actions: a*_i ∈ B_i(a*_{-i}) for every player i (1). If the best-response function is single-valued: let b_i(a_{-i}) be the single member of B_i(a_{-i}), i.e. B_i(a_{-i}) = {b_i(a_{-i})}. Then condition (1) is equivalent to: a*_i = b_i(a*_{-i}) for every player i (2). If the best-response function is single-valued and there are 2 players, condition (1) is equivalent to: a*_1 = b_1(a*_2) and a*_2 = b_2(a*_1).

Prisoner's Dilemma

         Q      F
    Q   2,2    0,3
    F   3,0    1,1

B_i(Q) = {F} for i = 1, 2
B_i(F) = {F} for i = 1, 2

BoS

                 Bach   Stravinsky
    Bach         2,1      0,0
    Stravinsky   0,0      1,2

B_i(Bach) = {Bach} for i = 1, 2
B_i(Stravinsky) = {Stravinsky} for i = 1, 2

Matching Pennies

            Head    Tail
    Head    1,-1   -1,1
    Tail   -1,1     1,-1

B_1(Head) = {Head}   B_2(Head) = {Tail}
B_1(Tail) = {Tail}   B_2(Tail) = {Head}

         L      M      R
    T   1,1    1,0    0,1
    B   1,0    0,1    1,0

B_1(L) = {T, B}   B_1(M) = {T}   B_1(R) = {B}
B_2(T) = {L, R}   B_2(B) = {M}
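The set-valued best responses for this 2x3 game can be computed mechanically. A small Python sketch (the dictionary encoding is an assumption for illustration):

```python
# Payoffs for the 2x3 game: rows T, B for Player 1; columns L, M, R for Player 2.
u1 = {("T", "L"): 1, ("T", "M"): 1, ("T", "R"): 0,
      ("B", "L"): 1, ("B", "M"): 0, ("B", "R"): 1}
u2 = {("T", "L"): 1, ("T", "M"): 0, ("T", "R"): 1,
      ("B", "L"): 0, ("B", "M"): 1, ("B", "R"): 0}
rows, cols = ["T", "B"], ["L", "M", "R"]

def B1(col):
    """Player 1's best-response set to Player 2's column choice."""
    best = max(u1[(r, col)] for r in rows)
    return {r for r in rows if u1[(r, col)] == best}

def B2(row):
    """Player 2's best-response set to Player 1's row choice."""
    best = max(u2[(row, c)] for c in cols)
    return {c for c in cols if u2[(row, c)] == best}

print(B1("L"), B1("M"), B1("R"))
print(B2("T"), B2("B"))
```

This reproduces the sets listed on the slide: B_1(L) = {T, B}, B_1(M) = {T}, B_1(R) = {B}, B_2(T) = {L, R}, B_2(B) = {M}.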

Finding Nash equilibrium with Best-Response functions We can use this to find Nash equilibria when the action space is continuous. Step 1: Calculate the best-response functions. Step 2: Find an action profile a* that satisfies: a*_i ∈ B_i(a*_{-i}) for every player i. Or, if every player's best-response function is single-valued, find a solution of the n equations (n is the number of players): a*_i = b_i(a*_{-i}) for every player i.

Example: synergistic relationship (37.2 in book) Two individuals. Each decides how much effort to devote to the relationship. The amount of effort a_i is a non-negative real number (so the action space is infinite). Payoff to Player i: u_i(a_i) = a_i(c + a_j - a_i), where c > 0 is a constant.

Finding the Nash Equilibrium Construct the players' best-response functions. Player i's payoff function: u_i(a_i) = a_i(c + a_j - a_i). Given a_j, this becomes a quadratic in a_i: u_i(a_i) = a_i c + a_i a_j - a_i^2. The best response to a_j is where this quadratic is maximized. Take the derivative and set it to 0: c + a_j - 2a_i = 0, so a_i = (c + a_j)/2. So, the best response functions are: b_1(a_2) = (c + a_2)/2 and b_2(a_1) = (c + a_1)/2.

Finding the Nash Equilibrium The pair (a*_1, a*_2) is a Nash equilibrium if a*_1 = b_1(a*_2) and a*_2 = b_2(a*_1). Solving the two equations a*_1 = (c + a*_2)/2 and a*_2 = (c + a*_1)/2 gives a unique solution (c, c). Therefore, this game has a unique Nash equilibrium: a*_1 = c, a*_2 = c.
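The fixed point (c, c) can also be found numerically by iterating the best-response functions. A minimal sketch (the value c = 4.0 and the starting point are arbitrary choices for illustration):

```python
# Iterate the best-response functions b_i(a_j) = (c + a_j)/2 of the
# synergistic-relationship game until they converge to the fixed point.
c = 4.0

def best_response(a_other):
    # Maximizer of the concave quadratic u_i(a_i) = a_i * (c + a_other - a_i).
    return (c + a_other) / 2

a1, a2 = 0.0, 0.0  # start from zero effort
for _ in range(60):
    a1, a2 = best_response(a2), best_response(a1)  # simultaneous update

print(a1, a2)  # both converge to c
```

Each iteration halves the distance to the fixed point, so convergence is fast; this matches the unique equilibrium a*_1 = a*_2 = c derived above.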

Finding the Nash Equilibrium The intersection of b_1(a_2) = (c + a_2)/2 and b_2(a_1) = (c + a_1)/2 is the Nash equilibrium. Note that using calculus to find the best response requires that the payoffs are concave.

Direct Proof of Nash Equilibrium Sometimes, the only way to find the set of NE is to classify all possible outcomes into cases, and prove that each case is or is not a NE. Consider the game we saw last week: guess 2/3 of the average, with 3 players. Players: 3 people. Action set: player i chooses a number x_i ∈ [0, 100]. Preferences: the k players whose x_i is closest to (2/3)(x_1 + x_2 + x_3)/3 each get a payoff of 1/k. Everyone else gets a payoff of 0.

Direct Proof of Nash Equilibrium Case 1: x_1 = x_2 = x_3 = 0. All players get a payoff of 1/3, and the target is (2/3)(x_1 + x_2 + x_3)/3 = 0. Suppose player i deviates by choosing x_i = y > 0. The new target becomes (2/3)(y/3) = 2y/9. Player i's distance to the target is |y - 2y/9| = 7y/9. The distance of the other players to the target is 2y/9, which is smaller. Player i's payoff goes from 1/3 to 0, so he has no incentive to deviate. Therefore, this case is a Nash equilibrium.

Direct Proof of Nash Equilibrium Case 2: x_1 = x_2 = x_3 = x > 0. All players get a payoff of 1/3, and the target is (2/3)(x_1 + x_2 + x_3)/3 = (2/3)x. Suppose player i switches from x to x/2. The target goes down by x/9 to 5x/9. Player i is now closest and gets a payoff of 1, and therefore has an incentive to deviate. This case is not a Nash equilibrium.

Direct Proof of Nash Equilibrium Case 3: Any other combination of x_1, x_2, x_3. At least one player is not among the closest, and gets a payoff of 0. This player can always increase his payoff by changing his number to something closer to the target. Therefore, this case is not a Nash equilibrium.
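The deviation arguments in Cases 1 and 2 can be checked numerically. A small sketch (the specific numbers 6 and 30 are arbitrary choices for illustration):

```python
# Payoff rule of the guess-2/3-of-the-average game: the players closest
# to (2/3) * average split a payoff of 1; everyone else gets 0.
def payoffs(xs):
    target = (2 / 3) * sum(xs) / len(xs)
    dists = [abs(x - target) for x in xs]
    d_min = min(dists)
    winners = [i for i, d in enumerate(dists) if d == d_min]
    return [1 / len(winners) if i in winners else 0.0 for i in range(len(xs))]

# Case 1: everyone plays 0; deviating to y > 0 only hurts the deviator.
print(payoffs([0, 0, 0]))  # [1/3, 1/3, 1/3]
print(payoffs([6, 0, 0]))  # deviator (player 0) gets 0
# Case 2: everyone plays x > 0; deviating to x/2 wins outright.
x = 30
print(payoffs([x / 2, x, x]))  # [1.0, 0.0, 0.0]
```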

Extensive Form Games (Chapter 5) So far, we've been using strategic form (or normal form) games. All players are assumed to move simultaneously. This cannot capture a sequential situation, where one player moves, then another, or where one player can get information on the moves of the other players before making his own move. We will introduce a way of specifying a game that allows this.

Example: An Entry Game Suppose we have a situation where there is an incumbent and a challenger. For example, an industry might have an established dominant firm. A challenger firm is deciding whether it wants to enter this industry and compete with the incumbent. If the challenger enters, the incumbent chooses whether to engage in intense (and possibly costly) competition, or to accept the challenger's entry.

Entry Game There are two players: the incumbent and the challenger. The challenger moves first and has two actions: In and Out. If the challenger chooses In, the incumbent chooses Fight or Acquiesce. Challenger's preference over outcomes: (In, Acquiesce) > (Out) > (In, Fight). Incumbent's preference over outcomes: (Out) > (In, Acquiesce) > (In, Fight). We can represent these preferences with the payoff functions (the challenger is Player 1, with payoff u_1): u_1(In, Acquiesce) = 2, u_1(Out) = 1, u_1(In, Fight) = 0; u_2(Out) = 2, u_2(In, Acquiesce) = 1, u_2(In, Fight) = 0.

Game Tree We can represent this game with a tree diagram. The root node of the tree is the first move in the game (here, by the challenger). Each action at a node corresponds to a branch in the tree. Outcomes are leaf nodes (i.e. there are no more branches). The first number at each outcome is the payoff to the first player (the challenger).

Formal Specification of an Extensive Game Formally, we need to specify all possible sequences of actions, and all possible outcomes. A history is the sequence of actions played from the beginning, up to some point in the game. In the tree, a history is a path from the root to some node in the tree. In the entry game, all possible histories are: ∅ (the empty history, i.e. at the beginning, no actions played yet), (In), (Out), (In, Acquiesce), (In, Fight). A terminal history is a sequence of actions that specifies an outcome, which is what players have preferences over. In the tree, a terminal history is a path from the root to a leaf node (a node with no branches). In the entry game, the terminal histories are: (Out), (In, Acquiesce), (In, Fight). A player function specifies whose turn it is to move at every non-terminal history (every non-leaf node in the tree).

Formal Specification of an Extensive Game An extensive game is specified by four components: A set of players A set of terminal histories, with the property that no terminal history can be a subsequence of some other terminal history A player function that assigns a player to every non-terminal history For each player, preferences over the set of terminal histories The sequence of moves and the set of actions at each node are implicitly determined by these components. In practice, we will use trees to specify extensive games.
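The four components above can be written down concretely for the Entry Game. A sketch, assuming a plain data-structure encoding (the book itself just uses trees):

```python
# The Entry Game as (players, terminal histories, player function, preferences).
players = {1: "Challenger", 2: "Incumbent"}
terminal_histories = [("Out",), ("In", "Acquiesce"), ("In", "Fight")]
player_function = {(): 1, ("In",): 2}  # who moves after each non-terminal history
preferences = {  # payoff pairs (u1, u2) over terminal histories
    ("Out",): (1, 2), ("In", "Acquiesce"): (2, 1), ("In", "Fight"): (0, 0),
}

# As the slide notes, the set of actions available after a history is
# implicitly determined by the set of terminal histories:
def actions_after(h):
    return sorted({th[len(h)] for th in terminal_histories
                   if th[:len(h)] == h and len(th) > len(h)})

print(actions_after(()))       # ['In', 'Out']
print(actions_after(("In",)))  # ['Acquiesce', 'Fight']
```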

Solutions to Entry Game How can we find the solution to this game? First approach: each player imagines what will happen at future nodes, and uses that to determine his choice at the current node. Suppose we're at the node just after the challenger plays In. At this point, the payoff-maximizing choice for the incumbent is Acquiesce, which gives the payoff pair (2,1). So, at the beginning, the challenger can assume that playing In leads to the payoff pair (2,1), which gives him a higher payoff than Out. This approach is called backwards induction: imagining what will happen at the end, and using that to determine what to do in earlier situations.

Backwards Induction At each move, for each action, a player deduces the actions that all players will rationally take in the future. This gives the outcome that will occur (assuming everyone behaves rationally), and therefore gives the payoff to each current action. However, in some cases, backwards induction doesn't give a clear prediction about what will happen. In a version of the Entry Game where Acquiesce and Fight give the same payoff to the incumbent, it is unclear what the challenger should expect at the beginning of the game. Backwards induction also fails in games with infinitely long histories (e.g. an infinitely repeated game).
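The backwards-induction procedure can be sketched in code for the Entry Game. The nested-dictionary tree encoding below is an assumed representation for illustration, not the book's notation:

```python
# Entry Game tree: internal nodes record whose turn it is (0 = challenger,
# 1 = incumbent) and the available moves; leaves are payoff pairs.
tree = {
    "player": 0,
    "moves": {
        "Out": (1, 2),  # leaf: (challenger payoff, incumbent payoff)
        "In": {
            "player": 1,
            "moves": {"Acquiesce": (2, 1), "Fight": (0, 0)},
        },
    },
}

def backwards_induction(node):
    """Return (payoff pair, actions along the induced path) for a subtree."""
    if isinstance(node, tuple):  # leaf node
        return node, []
    i = node["player"]
    best = None
    for action, child in node["moves"].items():
        payoff, path = backwards_induction(child)
        if best is None or payoff[i] > best[0][i]:
            best = (payoff, [action] + path)
    return best

payoff, path = backwards_induction(tree)
print(path, payoff)  # ['In', 'Acquiesce'] (2, 1)
```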

Strategies in Extensive Form Games Another approach is to formulate this as a strategic game, then use the Nash equilibrium solution concept. We need to expand the action sets of the players to take into account the different actions at each node. For each player i, we will specify the action chosen at all of i's nodes, i.e. at every history after which it is i's turn to move. Definition: A strategy of player i in an extensive game with perfect information is a function that assigns to each history h after which it is i's turn to move, an action in A(h) (the set of actions available after h).

In this game, Player 1 only moves at the start (i.e. after the empty history). The actions available are C and D, so Player 1 has two strategies: C, D. Player 2 moves after the history C and also after D. After C, the available actions are E, F. After D, the available actions are G, H. Player 2 therefore has four strategies. In this case, it's simple enough to write them together: we can refer to these strategies as EG, EH, FG, FH, where the first action corresponds to the history C and the second to the history D.
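Player 2's strategy set is the Cartesian product of the action sets at his two histories. A one-line sketch (the dictionary encoding is an assumption for illustration):

```python
import itertools

# Actions available to Player 2 after each history at which he moves.
actions_at = {"C": ["E", "F"], "D": ["G", "H"]}

# A strategy picks one action per history; the product enumerates them all.
strategies = ["".join(s) for s in itertools.product(*actions_at.values())]
print(strategies)  # ['EG', 'EH', 'FG', 'FH']
```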

Strategies in Extensive Form Games We can think of a strategy as an action plan or contingency plan: if Player 1 chooses action X, do Y. However, a strategy must specify an action for all histories, even those that cannot occur due to previous choices in the strategy itself. In this example, a strategy for Player 1 must specify an action for the history (C, E), even if it specifies D at the beginning. Think of this as allowing for the possibility of mistakes in execution.

Strategy Profiles & Outcomes As before, a strategy profile is a list of the strategies of all players. Given a strategy profile s, the terminal history that results by executing the actions specified by s is denoted O(s), the outcome of s. For example, in this game, the outcome of the strategy pair (DG, E) is the terminal history D. The outcome of (CH, E) is the terminal history (C, E, H).

Nash Equilibrium Definition The strategy profile s* in an extensive game with perfect information is a Nash equilibrium if, for every player i and every strategy r_i of player i, the outcome O(s*) is at least as good as the outcome O(r_i, s*_{-i}) generated by the strategy profile (r_i, s*_{-i}) in which player i chooses r_i: u_i(O(s*)) ≥ u_i(O(r_i, s*_{-i})) for every strategy r_i of player i. We can construct the strategic form of an extensive game by listing all strategies of all players and finding the outcome of each strategy profile.

Strategic Form of Entry Game The strategic form of the Entry Game is:

           Acquiesce   Fight
    In        2,1       0,0
    Out       1,2       1,2

There are two Nash equilibria: (In, Acquiesce) and (Out, Fight). The first NE is the same as the one found with backwards induction. In the second NE, the incumbent chooses Fight. However, once In is taken as given, Fight is not rational; it is a non-credible threat. If the incumbent could commit to Fight at the beginning of the game, the threat would be credible.
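Both equilibria of the strategic form can be found by brute force, just as for the simultaneous-move games earlier. A sketch (the dictionary encoding is an illustration):

```python
# Strategic form of the Entry Game:
# (challenger strategy, incumbent strategy) -> (u1, u2).
U = {
    ("In", "Acquiesce"): (2, 1), ("In", "Fight"): (0, 0),
    ("Out", "Acquiesce"): (1, 2), ("Out", "Fight"): (1, 2),
}
S1, S2 = ["In", "Out"], ["Acquiesce", "Fight"]

# A profile is a NE if neither player can gain by a unilateral deviation.
nash = [
    (s1, s2) for s1 in S1 for s2 in S2
    if all(U[(d, s2)][0] <= U[(s1, s2)][0] for d in S1)
    and all(U[(s1, d)][1] <= U[(s1, s2)][1] for d in S2)
]
print(nash)  # [('In', 'Acquiesce'), ('Out', 'Fight')]
```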

Subgames The concept of Nash equilibrium ignores the sequential structure of an extensive game: it treats strategies as choices made once and for all at the beginning of the game. However, the equilibria found this way may contain non-credible threats. We'll define a notion of equilibrium that excludes such non-credible behavior. Suppose Γ is an extensive form game with perfect information. The subgame following a non-terminal history h, denoted Γ(h), is the game beginning at the point just after h. A proper subgame is a subgame that is not Γ itself.

Subgames This game has two proper subgames:

Subgame Perfect Equilibria A subgame perfect equilibrium is a strategy profile s* that induces a Nash equilibrium in every subgame. Each player's strategy must be optimal in every subgame in which he moves, not just in the game as a whole. (Out, Fight) is a NE, but it is not a subgame perfect equilibrium, because in the subgame following In, the strategy Fight is not optimal for the incumbent.

Subgame Perfect Equilibria Every subgame perfect equilibrium is also a Nash equilibrium, but not vice versa. A subgame perfect equilibrium induces a Nash equilibrium in every subgame. In games with finite histories, subgame perfect equilibria are consistent with backwards induction.
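For the Entry Game, the refinement amounts to discarding any Nash equilibrium whose incumbent strategy is suboptimal in the only proper subgame (after In). A minimal sketch using the payoff numbers from the earlier slides:

```python
# Incumbent's payoffs in the proper subgame that follows In.
incumbent_payoff_after_in = {"Acquiesce": 1, "Fight": 0}

# The two Nash equilibria of the Entry Game's strategic form.
nash = [("In", "Acquiesce"), ("Out", "Fight")]

# Subgame perfection: the incumbent's strategy must be optimal on its own
# in the subgame after In.
best_in_subgame = max(incumbent_payoff_after_in,
                      key=incumbent_payoff_after_in.get)
spe = [(s1, s2) for (s1, s2) in nash if s2 == best_in_subgame]
print(spe)  # [('In', 'Acquiesce')]
```

Only (In, Acquiesce) survives, matching the backwards-induction outcome.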
