Introductory Microeconomics

Prof. Wolfram Elsner, Faculty of Business Studies and Economics, iino — Institute of Institutional and Innovation Economics
Introductory Microeconomics: More Formal Concepts of Game Theory and Evolutionary Game Theory

Readings for this lecture
Mandatory reading this time: More Formal Concepts of Game Theory and Evolutionary Game Theory, in: Elsner/Heinrich/Schwardt (2014): The Microeconomics of Complex Economies, Academic Press, pp. 193-226.
The lecture and the slides are complements, not substitutes.
An additional reading list can be found at the companion website.

Outline
The basic games of classical GT were introduced in Chapter 2.
Now: understand the formal structure of classical decision theory and GT.
Classical GT relies on (boundedly) rational agents.
Evolutionary GT allows us to relax such assumptions and focuses on the dynamic performance of strategies.
First: understand the most important formal concepts of classical GT.
Then: move to evolutionary GT.

Basic Concepts of Game Theory
A strategic game is described by:
- the rules of the game,
- the agents of the game (here: a finite number),
- the strategies of the agents (here: a finite number),
- the information available to the agents.
Normal-form game: agents make their decisions simultaneously; they do not know the decisions of the others.

Notation I
Symbol — Explanation
$s_i \in S_i$ — a pure strategy of the $i$th agent
$S_i$ — set of all pure strategies of agent $i$
$S = \{S_i\},\ i = 1, \dots, n$ — set of strategies of all players
$s_{-i} = \{s_j\},\ j \neq i$ — strategies of all agents other than agent $i$
$s = (s_i, s_{-i})$ — feasible configuration of strategies

Notation II
Symbol — Explanation
$\Pi_i(S) = \Pi_i(s),\ s = (s_i, s_{-i})$ — set of payoffs for all possible combinations of strategies
$G = \{S_i;\ \Pi_i(S);\ I_i\},\ i = 1, \dots, n$ — general description of a normal-form game
$G = \{S_1, S_2;\ \Pi_1(S), \Pi_2(S);\ I_1, I_2\}$ — description of a normal-form game with two players

Matrix notation
$G = \{S_1, S_2;\ \Pi_1(S), \Pi_2(S);\ I_1, I_2\}$

(Player A rows, Player B columns; each cell lists $\Pi_A, \Pi_B$)
                    B: Strategy 1                                 B: Strategy 2
A: Strategy 1   $\Pi_A(s_{A1}, s_{B1}),\ \Pi_B(s_{A1}, s_{B1})$   $\Pi_A(s_{A1}, s_{B2}),\ \Pi_B(s_{A1}, s_{B2})$
A: Strategy 2   $\Pi_A(s_{A2}, s_{B1}),\ \Pi_B(s_{A2}, s_{B1})$   $\Pi_A(s_{A2}, s_{B2}),\ \Pi_B(s_{A2}, s_{B2})$
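As a side note, a minimal sketch of how such a two-player normal-form game can be held in code; the class name and example payoffs are illustrative, not from the textbook:

```python
import numpy as np

# A two-player normal-form game, stored as one payoff matrix per player:
# entry [i, j] is the payoff when player A plays strategy i and B plays j.
class NormalFormGame:
    def __init__(self, payoffs_a, payoffs_b):
        self.a = np.asarray(payoffs_a, dtype=float)
        self.b = np.asarray(payoffs_b, dtype=float)
        assert self.a.shape == self.b.shape

    def payoffs(self, i, j):
        """Payoff pair (Pi_A, Pi_B) for the strategy profile (i, j)."""
        return self.a[i, j], self.b[i, j]

# Example: a Prisoner's Dilemma with illustrative payoffs.
game = NormalFormGame([[3, 0], [5, 1]], [[3, 5], [0, 1]])
print(game.payoffs(0, 0))  # both cooperate -> (3.0, 3.0)
```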

Assumptions regarding the agents
In order to predict the outcome of an interaction, assumptions regarding the agents' behavior must be made:
Utility-maximizing agents with well-defined preference orderings, i.e. for any outcomes a and b the following holds:
Completeness: $a \succeq b$ or $b \succeq a$ (or $a \sim b$)
Reflexivity: $a \succeq a$
Transitivity: $a \succeq b \wedge b \succeq c \Rightarrow a \succeq c$
Common Knowledge of Rationality (CKR): every agent knows that all agents are rational, that all other agents also know that all agents are rational, that they in turn are aware that all agents know that all are rational, etc.

A Preliminary Consideration: Non-Interactive Concepts, Decision Theory
Consider a non-interactive decision situation (to bring or not to bring an umbrella) under risk (the state of the world is unknown; it may or may not rain).

(Player rows, states of the world columns)
                  Rain   No Rain
Bring Umbrella      4       5
No Umbrella       -10      10

How will the player decide? The obvious difference to game theory is that the state of the world is unknown and unconstrained by rationality.

Non-Interactive Concepts, Decision Theory
An optimistic concept: Maximax — find the best possible payoff for every strategy and maximize.
A pessimistic concept: Minimax (more precisely: maximin) — find the worst possible payoff for every strategy and maximize.

Maximax:                            Minimax/maximin:
        Rain   No Rain   Max                Rain   No Rain   Min
U         4       5        5        U         4       5        4
No U    -10      10       10        No U    -10      10      -10

Maximax picks No Umbrella (best maximum: 10); maximin picks Umbrella (best minimum: 4).

Non-Interactive Concepts, Decision Theory contd.
An opportunity-cost-based concept: Savage's Minimax Regret — construct the regret matrix (how much the player would regret this decision in this state of the world, compared to the best other decision she could have taken), find the highest regret for every strategy, and minimize.

Payoffs:                     Regret:
        Rain   No Rain              Rain   No Rain   Max
U         4       5          U        0       5        5
No U    -10      10          No U    14       0       14

(Regret of No Umbrella under Rain: 4 - (-10) = 14.) Minimax regret picks Umbrella.

Non-Interactive Concepts, Decision Theory contd.
There are also parametric decision criteria (i.e. criteria that assign a priori probabilities to the states of the world), e.g. the Laplace and the Hurwicz criterion.
Of course, all decision-theory concepts may also be used to make predictions for strategic games (these predictions would be valid even without CKR, i.e. even if agents assume their opponents to be irrational).
However, for many problem structures these concepts fail. GT allows for more advanced prediction methods by taking the agents' capability to make consistent, systematic, and rational decisions into account.
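A minimal sketch of the three non-parametric criteria above, applied to the umbrella matrix (function names are illustrative):

```python
import numpy as np

# Rows = strategies (Umbrella, No Umbrella), columns = states (Rain, No Rain).
payoffs = np.array([[4.0, 5.0],
                    [-10.0, 10.0]])
strategies = ["Umbrella", "No Umbrella"]

def maximax(p):
    """Optimistic: pick the strategy with the best best-case payoff."""
    return int(np.argmax(p.max(axis=1)))

def maximin(p):
    """Pessimistic: pick the strategy with the best worst-case payoff."""
    return int(np.argmax(p.min(axis=1)))

def minimax_regret(p):
    """Savage: regret = column best minus own payoff; minimize the maximum regret."""
    regret = p.max(axis=0) - p
    return int(np.argmin(regret.max(axis=1)))

print(strategies[maximax(payoffs)])         # No Umbrella
print(strategies[maximin(payoffs)])         # Umbrella
print(strategies[minimax_regret(payoffs)])  # Umbrella
```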

How to solve the game
Players are rational, i.e. payoff-maximizing and neither benevolent nor envious. They know that all other players think the same way. Expectations about the behavior of the other players can therefore be formed. The players then play the strategy giving them the best outcome (highest payoff), given their expectations about the reasoning of the other players.

Dominance of strategies
A strategy $s_i \in S_i$ dominates a strategy $\tilde{s}_i \in S_i$ iff
$\Pi_i(s_i, s_{-i}) \geq \Pi_i(\tilde{s}_i, s_{-i}) \ \forall s_{-i}$  and  $\exists s_{-i}: \Pi_i(s_i, s_{-i}) > \Pi_i(\tilde{s}_i, s_{-i})$
The latter strategy $\tilde{s}_i$ is said to be dominated by the first. If a strategy gives strictly higher payoffs regardless of the choice of the opponents, it is said to strictly dominate the other strategy. Rational players never play a strictly dominated strategy.

SESDS
One way to predict the outcome of a game is therefore the successive elimination of strictly dominated strategies (SESDS). SESDS does not require any further assumptions about the opponent's behavior.

             Strat. 1   Strat. 2                      Strat. 1
Strategy 1     4, 4       2, 2      →    Strategy 1     4, 4
Strategy 2     2, 2       0, 0           Strategy 2     2, 2

Strategy 2 is strictly dominated for the column player and is eliminated; in the reduced game, Strategy 2 is strictly dominated for the row player as well, leaving (Strategy 1, Strategy 1).

SESDS
SESDS does not always yield a solution:

(Player A rows, Player B columns)
             Strategy 1   Strategy 2
Strategy 1     2, 2         0, 0
Strategy 2     0, 0         2, 2

There are no strictly dominated strategies to remove.
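A minimal sketch of SESDS for two-player games (illustrative, not from the textbook): repeatedly delete any pure strategy that is strictly dominated by another pure strategy.

```python
import numpy as np

def sesds(a, b):
    """Successive elimination of strictly dominated pure strategies.

    a, b: payoff matrices of the row and column player.
    Returns the surviving row and column strategy indices."""
    rows = list(range(a.shape[0]))
    cols = list(range(a.shape[1]))
    changed = True
    while changed:
        changed = False
        # Row r is eliminated if some r2 does strictly better against all surviving columns.
        for r in rows[:]:
            if any(all(a[r2, c] > a[r, c] for c in cols) for r2 in rows if r2 != r):
                rows.remove(r)
                changed = True
        for c in cols[:]:
            if any(all(b[r, c2] > b[r, c] for r in rows) for c2 in cols if c2 != c):
                cols.remove(c)
                changed = True
    return rows, cols

# The solvable example above: Strategy 2 is strictly dominated for both players.
a = np.array([[4, 2], [2, 0]])
print(sesds(a, a.T))  # ([0], [0]): only (Strategy 1, Strategy 1) survives
```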

Nash Equilibria
SESDS does not always yield a solution. But thanks to CKR, agents can form expectations about the choices made by other agents. They can choose the best possible response to the expected choices of their opponents, and due to CKR the others will do so as well. The resulting situation is a combination of mutual best responses. This situation is called a Nash Equilibrium (NE).

Nash Equilibria
The formal definition is as follows: a strategy configuration $s^* = (s_i^*, s_{-i}^*)$ is a NE iff
$\Pi_i(s_i^*, s_{-i}^*) \geq \Pi_i(s_i, s_{-i}^*) \quad \forall s_i \in S_i,\ \forall i$
If a game is solvable via SESDS, the solution is also a NE. But not all NE can be found via SESDS. Also, there are games that do not have a NE in pure strategies at all. Therefore: introduce the distinction between pure and mixed strategies.

Mixed Strategies
How do you play Rock-Paper-Scissors? Always playing the same strategy is a bad idea. Mixed strategies capture the idea of playing different pure strategies with some probability.

(Player A rows, Player B columns; payoffs A, B)
            Rock      Paper     Scissors
Rock        0, 0     -1, 1      1, -1
Paper       1, -1     0, 0     -1, 1
Scissors   -1, 1      1, -1     0, 0

Nash Equilibrium and Mixed Strategies
A mixed strategy $\sigma_i$ for player i is a vector in which every pure strategy is associated with a probability. For the two-strategy case: $\sigma_i = \binom{p}{1-p}$
The Nash Equilibrium is defined as a configuration $(\sigma_i^*, \sigma_{-i}^*)$ of mixed strategies for the n players such that
$\Pi_i(\sigma_i^*, \sigma_{-i}^*) \geq \Pi_i(\sigma_i, \sigma_{-i}^*) \quad \forall \sigma_i,\ \forall i$

Nash Equilibrium and Mixed Strategies
Every finite n-person normal-form game with a finite number of strategies for each player has at least one NE in pure or mixed strategies (proven by John Nash in 1950). Thanks to the NE and the concept of mixed strategies, all such games can be solved in theory. How can we compute the NE in mixed strategies?

Computation of NE in Mixed Strategies

(Hawk-Dove game; Player A rows, Player B columns; payoffs A, B)
         Dove    Hawk
Dove     2, 2    1, 3
Hawk     3, 1    0, 0

Define the matrix A with the payoffs for the first player:
$A = \begin{pmatrix} 2 & 1 \\ 3 & 0 \end{pmatrix}$
Expected payoffs are given by: $\Pi_1 = \sigma_1^T A \sigma_2$

Computation of NE in Mixed Strategies
$\Pi_1 = \sigma_1^T A \sigma_2 = (p \;\; 1-p) \begin{pmatrix} 2 & 1 \\ 3 & 0 \end{pmatrix} \begin{pmatrix} q \\ 1-q \end{pmatrix}$
$\Pi_1 = -2pq + 3q + p$
$\frac{\partial \Pi_1}{\partial p} = -2q + 1 = 0 \;\Rightarrow\; q = 0.5$, and by symmetry $p = 0.5$
This procedure can be employed for every (symmetric) game: always take the first derivative of the expected payoff function with respect to the strategy parameter of the same player (p for player 1, q for player 2) and solve for 0. For symmetric games, the equations are identical for both players (with p and q exchanged) and thus have to be solved only once.
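A minimal symbolic sketch of this derivative procedure for the Hawk-Dove game (uses sympy; variable names are illustrative):

```python
import sympy as sp

p, q = sp.symbols("p q", real=True)
A = sp.Matrix([[2, 1], [3, 0]])          # row player's payoffs (Hawk-Dove)
sigma1 = sp.Matrix([p, 1 - p])
sigma2 = sp.Matrix([q, 1 - q])

# Expected payoff of player 1: Pi_1 = sigma_1^T A sigma_2
Pi1 = (sigma1.T * A * sigma2)[0]
print(sp.expand(Pi1))                    # -2*p*q + p + 3*q

# Player 1 is indifferent (dPi_1/dp = 0) at the opponent's equilibrium mix q*.
q_star = sp.solve(sp.diff(Pi1, p), q)[0]
print(q_star)                            # 1/2; by symmetry, p* = 1/2
```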

Reaction Functions
p and q can also be shown as the reaction functions p(q) and q(p), the players' reactions to the opponent's choice. Intersections denote NE (the current Hawk-Dove game has 3).
Note: the caption of Figure 8.12 in Chapter 8 is incorrect (it is the same figure as on this slide; the figure depicts Hawk-Dove, not Matching Pennies).

Reaction Functions
Mixed-strategy NE are where players are indifferent between both pure (and mixed) strategies (i.e. where $\partial\Pi_1/\partial p = 0$ and $\partial\Pi_2/\partial q = 0$).
Mixed-strategy NE may seem oddly unstable, but this is not necessarily the case (see Evolutionary Game Theory below). Also, under CKR it is rational to deliberately choose mixed NE strategies in order to facilitate the emergence of the equilibrium and to avoid being exploited (Aumann's defense).
This is especially true if there is no pure-strategy NE; consider the Matching Pennies game (or the Rock-Paper-Scissors game above).

Mixed Strategy NE

(Matching Pennies; Player 1 rows, Player 2 columns; payoffs 1, 2)
          Heads     Tails
Heads     -1, 1     1, -1
Tails      1, -1   -1, 1

In zero-sum games (like this Matching Pennies game), an unequal payoff means one player is being exploited by the other.

Extensions of the Nash Equilibrium
Many extensions of the Nash Equilibrium have been developed. Some of these, the ones for extensive or repeated games (subgame-perfect Nash Equilibrium) as well as for evolutionary games (ESS, ...), will be presented below.
There are also refinements that ensure that the equilibrium remains valid under stochastic perturbations (the "trembling hand"), e.g. Selten's Trembling Hand Perfect Equilibrium, Myerson's Proper Equilibrium (see textbook), or Harsanyi's Bayesian Nash Equilibrium.

Extensive Form Games
In normal-form games the players make their decisions simultaneously, but this is not the case for other types of games. For games in sequential form we use a new notation, the extensive-form notation. Note that this is necessary only if the agents have full information about the decisions made by the previous agents; otherwise the game is equivalent to a normal-form game.

Extensive-Form Notation
[Game tree: Player 1 moves first; depending on her choice, Player 2 decides at one of two different nodes, leading to Results 1-4.]
Depending on the choice of the first player, the second player faces a different decision situation. It is therefore convenient to define complete strategies for the players. A complete strategy gives each player an instruction for all possible situations.

Complete Strategies
For an extensive game $G_E$, let V be a set containing all possible states of $G_E$.

[Game tree: Player 1 chooses C or D at $V_1$; C leads to Player 2's node $V_2$, D to Player 2's node $V_3$. At $V_2$: C leads to $V_4$ = (1, 1), D to $V_5$ = (-1, 2). At $V_3$: C leads to $V_6$ = (2, -1), D to $V_7$ = (0, 0).]

$V_A$ will contain all possible situations in which player A can possibly make a decision. A complete strategy for player A gives an instruction for any element of $V_A$.

How to solve extensive games
Analytical derivation of NE is cumbersome. Thanks to CKR, we can rely on backward induction.

Backward Induction
The reasoning for player 1 (in the game tree above) is as follows:
What would player 2 do in situation $V_2$? Choose D (2 > 1); the result would be $V_5$ = (-1, 2).
What would player 2 do in situation $V_3$? Choose D (0 > -1); the result would be $V_7$ = (0, 0).
Since player 1 prefers $V_7$ (via $V_3$) to $V_5$ (via $V_2$), she chooses D, which leads to $V_3$ and then $V_7$.
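A minimal sketch of backward induction on exactly this game tree (the node encoding is illustrative):

```python
# Each node is either a leaf with a payoff pair or (player, {move: subtree}).
tree = ("P1", {"C": ("P2", {"C": (1, 1), "D": (-1, 2)}),    # V2
               "D": ("P2", {"C": (2, -1), "D": (0, 0)})})   # V3

def backward_induction(node):
    """Return (payoffs, plan) of the backward-induction play of a finite tree."""
    if isinstance(node[0], str):          # internal node: a player moves
        player, moves = node
        idx = 0 if player == "P1" else 1  # which payoff this player maximizes
        best = None
        for move, child in moves.items():
            payoffs, plan = backward_induction(child)
            if best is None or payoffs[idx] > best[0][idx]:
                best = (payoffs, [(player, move)] + plan)
        return best
    return node, []                       # leaf: payoff pair, empty plan

print(backward_induction(tree))  # ((0, 0), [('P1', 'D'), ('P2', 'D')])
```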

How to solve extensive games
Analytical derivation of NE is cumbersome. Thanks to CKR, we can rely on backward induction.
Advantage: all NE found with backward induction are also subgame perfect. A NE is subgame perfect if it induces a NE in every subgame of the game.

Repeated Games and Supergames
Up to this point, every game we have investigated was played only once; the solutions we have obtained are called one-shot solutions. Now we consider repeated games.
A supergame is a sequence of repetitions of an underlying normal-form game G that is infinite from the players' point of view, i.e. either truly infinite or indefinite (with a stochastic probability of ending in each period).

Solving Supergames
If the sequence of repetitions is finite, the repeated game can be expressed as an extensive game and solved via backward induction. If it is infinite or indefinite, there is no last period, so backward induction will not work.
Still, strategies can be characterized as complete (if they contain instructions for every possible situation) or not complete, and equilibria can be characterized as subgame perfect or not.
If both players follow complete strategies, all choices are predictable and the payoffs are thus known in advance (to rational players and observers); this allows one to calculate a present value to be used in deciding between strategies in advance.

Present Values
To obtain the present value of a future payoff a, one uses a discount parameter δ (with $0 \leq \delta < 1$) which denotes the player's valuation of future payoffs: the present value in time 0 of a payoff a in time 0 is a; of the same payoff in time 1 it is δa; in time 2, $\delta^2 a$; etc. The present value of an infinite sequence of payoffs a is therefore given as
$\Pi = a + \delta a + \delta^2 a + \delta^3 a + \dots = \frac{a}{1-\delta}$
Agents make the decisions about their strategy plans based on the present values of the expected payoffs from the different strategy plans.
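The geometric-series step behind the last equality, spelled out:

```latex
\Pi = \sum_{t=0}^{\infty} \delta^{t} a
    = a \,\lim_{T \to \infty} \frac{1 - \delta^{T+1}}{1 - \delta}
    = \frac{a}{1-\delta}, \qquad 0 \le \delta < 1 .
```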

Strategy Plans
Strategy plans can be very simple, e.g. always playing one strategy; in the PD, ALL-D is a strategy plan according to which the player always defects. Strategy plans can also be more complex, especially when the choice of the player depends on the decisions made by other players in the past. In supergames, the players usually have a memory that allows expectation formation (the history of play is known to the players, so that strategy plans can make use of it).

Folk Theorem and Trigger Strategies
Consider a supergame based on a social dilemma (a generalized PD), i.e. a cooperative solution exists which constitutes the social optimum and a Pareto optimum of the underlying game, but the cooperative strategy is exploitable (i.e. the cooperative solution is not a NE).
The folk theorem states that it is nevertheless possible to reach the cooperative solution in the supergame. This is achieved by using trigger strategies, positing a credible threat of punishment if the opponent should deviate from the cooperative strategy (Rubinstein's proof).

Trigger Strategies and Rubinstein's Proof of the Folk Theorem
Define the strategy that holds the opponent down to a minimum payoff as the minmax strategy (analogous to maximin, except that the player's payoff is minimized by her opponents' choices, not maximized by her own choice).
If the opponent's payoff resulting from the minmax strategy being used on her is less than her payoff from cooperation, the minmax strategy can serve as a threat of punishment for deviating from cooperation.
The threat is said to be credible if the punishing player receives a higher or equal payoff from playing the minmax strategy than from allowing herself to be exploited (more generally: if the minmax strategy is not strictly dominated).
Credible threats can be used to construct a trigger strategy $s_{trigger}$.

Trigger Strategies and Rubinstein's Proof of the Folk Theorem contd.
A strategy plan can include the instruction to employ $s_{minmax}$ whenever a specific expectation is not met. By doing so, mutually beneficial agreements that do not constitute a NE in the one-shot case can be enforced.
Example: the Prisoner's Dilemma and tit-for-tat:
$s_{trigger} = s_{TFT}$: cooperate if the other player cooperated last round; defect if the other player did not cooperate last round.
Defecting is the minmax strategy that serves as the credible threat; all players playing this trigger strategy is a NE in the repeated PD.
Trigger strategies can be constructed in different ways; they could, for instance, also punish less forgivingly by defecting forever after a single defection.
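A standard supporting calculation (not spelled out on the slide; T > R > P > S is the usual PD payoff notation): cooperation sustained by an unforgiving ("grim") trigger is a best response iff the discounted value of cooperating beats a one-shot deviation followed by permanent punishment.

```latex
\underbrace{\frac{R}{1-\delta}}_{\text{cooperate forever}}
\;\ge\;
\underbrace{T + \frac{\delta P}{1-\delta}}_{\text{deviate once, then be punished}}
\quad\Longleftrightarrow\quad
\delta \;\ge\; \frac{T-R}{T-P}.
```

So cooperation is sustainable whenever players are sufficiently patient.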

Evolutionary Game Theory
EGT allows us to conveniently construct microfounded economic models, thus closing gaps between the micro, meso, and macro levels.
In Evolutionary Game Theory (EGT) we consider populations of agents, each continuously playing one and the same strategy according to her type. Agents are matched randomly to play the same underlying game again and again (the game must be symmetric so that the positions of row player and column player are exchangeable). The population shares of these types (strategies) develop according to their performance in those games.

Evolutionary Game Theory
The changing shares of types may be seen as reproduction, with the offspring continuing the same strategies, or as poorly performing agents consciously changing their strategy (type).
The most important solution techniques are:
1. analysis of evolutionary stability (evolutionarily stable strategies, ESS),
2. replicator dynamics,
3. simulation (see textbook Chapter 9).
The players are matched randomly. When they meet, they play the underlying game using their predetermined strategies. The composition of the population changes according to the agents' performance in playing the underlying game.

Evolutionary Stability
NE: a combination of mutual best answers. In the population context we consider only strategies: a strategy is evolutionarily stable if a population dominated by it is not invadable by any other strategy. If a population is dominated by an ESS, the situation will remain stable. What does "not invadable" mean?

The principle of evolutionary stability
Symbol — Explanation
$P^{GV}$ — an evolutionary population setting
$G^{GV}$ — the underlying (symmetric) one-shot normal-form game in this setting
$A$ — the payoff matrix
$\Pi_{\sigma_1/\sigma_2} = \sigma_1^T A \sigma_2$ — expected payoff of the first strategy against the second

The principle of evolutionary stability
Consider a population of agents playing σ. Now consider a very small group of players entering the population and playing $\tilde\sigma \neq \sigma$. If the new strategy yields better payoffs than the old one, the share of players playing $\tilde\sigma$ will increase. In this case $\tilde\sigma$ has invaded σ, and σ cannot be said to be an ESS. We will now formalize the concept of evolutionary stability.

How to test for evolutionary stability
Let ε be the (arbitrarily small) share of the invading group playing $\tilde\sigma$; a share of $(1-\varepsilon)$ is therefore playing strategy σ. σ is an ESS if it yields a higher expected payoff than $\tilde\sigma$. Formally:
$\sigma^T A (1-\varepsilon)\sigma + \sigma^T A \varepsilon\tilde\sigma \;>\; \tilde\sigma^T A (1-\varepsilon)\sigma + \tilde\sigma^T A \varepsilon\tilde\sigma$
This rather complicated formula can conveniently be tested using two simple conditions. The first is constructed by letting ε = 0, for which the above inequality must hold at least weakly (1st condition). The second results additionally for the case that the first condition holds with equality (2nd condition).
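Rearranged (a small added step), the inequality is linear in ε:

```latex
(1-\varepsilon)\left(\sigma^{T}A\sigma - \tilde\sigma^{T}A\sigma\right)
+ \varepsilon\left(\sigma^{T}A\tilde\sigma - \tilde\sigma^{T}A\tilde\sigma\right) > 0 ,
```

so for arbitrarily small ε it holds iff $\sigma^T A\sigma > \tilde\sigma^T A\sigma$, or $\sigma^T A\sigma = \tilde\sigma^T A\sigma$ together with $\sigma^T A\tilde\sigma > \tilde\sigma^T A\tilde\sigma$ — exactly the two conditions on the next slide.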

How to test for evolutionary stability
(Weak) first condition of evolutionary stability: $\sigma^T A \sigma \geq \tilde\sigma^T A \sigma$
- Does not hold → σ is not evolutionarily stable.
- Holds strictly ($\sigma^T A \sigma > \tilde\sigma^T A \sigma$) → σ is evolutionarily stable.
- Holds with equality → test the second condition of evolutionary stability: $\sigma^T A \tilde\sigma > \tilde\sigma^T A \tilde\sigma$
  - Holds → σ is evolutionarily stable.
  - Does not hold → σ is not evolutionarily stable.

How to test for evolutionary stability
Which candidate strategies to test: pure and mixed NE strategies, since non-NE strategies are by definition not best answers to themselves (which is required by the 1st condition of ESS).
Which invading strategies to test against: in the 2-strategy case, all competing pure strategies (mixed strategies are then only linear combinations of the two pure strategies); in cases with more than 2 strategies, evolutionary stability must be tested against mixed strategies as well.

Evolutionary Stability: Example
Consider as an example the Hawk-Dove game above. We have game matrix
$A = \begin{pmatrix} 2 & 1 \\ 3 & 0 \end{pmatrix}$
and three NE strategies $H = \binom{0}{1}$, $D = \binom{1}{0}$, and $M = \binom{0.5}{0.5}$.
Test H against D, 1st condition:
$(0\ 1)\, A \binom{0}{1} \geq (1\ 0)\, A \binom{0}{1}$?  That is, $0 \geq 1$?
The condition does not hold; H is therefore not an ESS because it can be invaded by D-players.

Evolutionary Stability: Example contd.
Test D against H, 1st condition:
$(1\ 0)\, A \binom{1}{0} \geq (0\ 1)\, A \binom{1}{0}$?  That is, $2 \geq 3$?
The condition does not hold; D is therefore not an ESS because it can be invaded by H-players.
M must be tested against both D and H. Test M against H, 1st condition:
$(0.5\ 0.5)\, A \binom{0.5}{0.5} \geq (0\ 1)\, A \binom{0.5}{0.5}$?  That is, $1.5 \geq 1.5$.
The condition holds with equality, so a test of the 2nd condition is necessary.

Evolutionary Stability: Example contd.
Test M against H, 2nd condition:
$(0.5\ 0.5)\, A \binom{0}{1} > (0\ 1)\, A \binom{0}{1}$?  That is, $0.5 > 0$.
The condition holds; H cannot invade M.
Test M against D, 1st condition:
$(0.5\ 0.5)\, A \binom{0.5}{0.5} \geq (1\ 0)\, A \binom{0.5}{0.5}$?  That is, $1.5 \geq 1.5$.
The condition holds with equality, so a test of the 2nd condition is necessary.

Evolutionary Stability: Example contd.
Test M against D, 2nd condition:
$(0.5\ 0.5)\, A \binom{1}{0} > (1\ 0)\, A \binom{1}{0}$?  That is, $2.5 > 2$.
The condition holds; D cannot invade M.
Since all mixed strategies are linear combinations of D and H, none of them (except M itself) is able to invade M. M is therefore this game's only ESS.
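A minimal numeric check of the example above (function names are illustrative):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [3.0, 0.0]])          # Hawk-Dove payoffs for the row strategy
D = np.array([1.0, 0.0])
H = np.array([0.0, 1.0])
M = np.array([0.5, 0.5])

def is_ess(sigma, invader, A, tol=1e-9):
    """The two ESS conditions for sigma against one candidate invader."""
    own, inv = sigma @ A @ sigma, invader @ A @ sigma
    if own > inv + tol:
        return True                  # strict first condition
    if abs(own - inv) <= tol:        # equality: check the second condition
        return sigma @ A @ invader > invader @ A @ invader + tol
    return False

print(is_ess(H, D, A))                    # False: H is invaded by D
print(is_ess(D, H, A))                    # False: D is invaded by H
print(is_ess(M, H, A), is_ess(M, D, A))   # True True: M is the ESS
```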

Replicator Dynamics
The focus is not on single strategies but on stable compositions of populations. The population is modeled as a dynamical system; a dynamical system describes how state variables change over time. Consider a system with z state variables:
$\theta_t = (\theta_{i,t})_{i=1,\dots,z} = (\theta_{1,t}, \theta_{2,t}, \dots, \theta_{z,t})^T$
The development path depends on the initial values and the development equations.

Replicator Dynamics
Here the state variables represent the population shares of agents playing a certain strategy (except for the last share, which is the difference to 100% and thus determined by the others). The shares may change according to a difference or a differential equation:
$\theta_{t+1} = \tilde{F}_D(\theta_t)$  or  $\frac{d\theta(t)}{dt} = \tilde{F}_d(\theta(t))$

Replicator Dynamics
Symbol — Explanation
$f_{i,t}$ — the evolutionary fitness of agent i at time t
$\phi_t = \sum_i \theta_{i,t} f_{i,t}$ — average fitness in the population
For the evolutionary performance of the agents, it is important how their fitness compares to the average fitness:
$\theta_{i,t+1} = \tilde{F}_D(\theta_{i,t}, f_{i,t}, \phi_t)$  or  $\frac{d\theta_i(t)}{dt} = \tilde{F}_d(\theta_i(t), f_i(t), \phi(t))$
One can now analyze the dynamical system and search for stable equilibria.

Replicator Dynamics
Replicator dynamics requires an explicit assumption on the form and speed of the replicator (this is implicit in ESS). Typical forms set the dynamic as proportional to the relation of individual and average fitness, the most common canonical forms being
$\theta_{i,t+1} = \theta_{i,t}\,(f_{i,t}/\phi_t)$  or  $\frac{d\theta_i(t)}{dt} = \theta_i(t)\,(f_i(t) - \phi(t))$

How to determine (stable) equilibria
The conditions for equilibria are:
$\theta_{i,t+1} = \tilde{F}_D(\theta_{i,t}, f_{i,t}, \phi_t) = \theta_{i,t}$  or  $\frac{d\theta_i(t)}{dt} = \tilde{F}_d(\theta_i(t), f_i(t), \phi(t)) = 0$
In order to test for the stability of the equilibria, one calculates the eigenvalues of the development equation at the equilibria (see Chapters 10 and 11 of the textbook for details).
For difference equations, the equilibrium is stable if all eigenvalues have an absolute value smaller than unity. For differential equations, the equilibrium is stable if all eigenvalues are negative (more precisely: have negative real parts).

Replicator Dynamics: Example
Consider again the Hawk-Dove game above with the canonical replicator function $\frac{d\theta_i(t)}{dt} = \theta_i(t)(f_i(t) - \phi(t))$; call the pure strategies $x_i$ and assume $f_i = \Pi_i = x_i^T A \theta$ (at any point in time; the (t) is left out for convenience).
We further have $A = \begin{pmatrix} 2 & 1 \\ 3 & 0 \end{pmatrix}$, $x_1 = D = \binom{1}{0}$, $x_2 = H = \binom{0}{1}$, and therefore also
$f_1 = (1\ 0)\, A \binom{\theta_1}{1-\theta_1} = 1 + \theta_1, \qquad f_2 = (0\ 1)\, A \binom{\theta_1}{1-\theta_1} = 3\theta_1$

Replicator Dynamics: Example contd.
Since $\phi = \sum_i \theta_i f_i = \theta_1 f_1 + (1-\theta_1) f_2$, we can rearrange the replicator equation $\frac{d\theta_1}{dt} = \theta_1(f_1 - \phi)$:
$\frac{d\theta_1}{dt} = \theta_1\big(f_1 - \theta_1 f_1 - (1-\theta_1) f_2\big) = \theta_1\big((1-\theta_1) f_1 - (1-\theta_1) f_2\big) = \theta_1 (1-\theta_1)(f_1 - f_2)$
Substituting $f_1$ and $f_2$ yields
$\frac{d\theta_1}{dt} = \theta_1 (1-\theta_1)\,(1 + \theta_1 - 3\theta_1) = \theta_1 (1-\theta_1)(1 - 2\theta_1)$

Replicator Dynamics: Example contd.
Applying the equilibrium condition $\frac{d\theta_1}{dt} = 0$ yields three fixed points: $\theta_{1,1} = 0$, $\theta_{1,2} = 1$, and $\theta_{1,3} = 0.5$.
To assess their stability, we bring the replicator equation into polynomial form, $\frac{d\theta_1}{dt} = 2\theta_1^3 - 3\theta_1^2 + \theta_1$, and obtain the only element of the system's Jacobian,
$\frac{\partial (d\theta_1/dt)}{\partial \theta_1} = 6\theta_1^2 - 6\theta_1 + 1$,
whose value at a fixed point (the linearization) is the (dominant) eigenvalue.

Replicator Dynamics: Example contd.
We obtain the three eigenvalues for the three equilibria:
$\lambda(\theta_{1,1}) = 1$ (i.e. $\theta_{1,1}$ is unstable),
$\lambda(\theta_{1,2}) = 1$ (i.e. $\theta_{1,2}$ is also unstable),
$\lambda(\theta_{1,3}) = -0.5$ (i.e. $\theta_{1,3}$ is stable).
The result is identical to that obtained from the analysis of ESS: a composition of the population (or, equivalently, an individual mixed strategy) with equal shares of both strategies is the only stable equilibrium or fixed point. The pure-strategy equilibria / fixed points exist but are unstable.
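A minimal numeric cross-check of this example (names are illustrative): evaluate the Jacobian at the fixed points and integrate the replicator equation from an arbitrary start.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [3.0, 0.0]])   # Hawk-Dove; index 0 = Dove, 1 = Hawk

def dtheta(theta1):
    """Replicator ODE for the Dove share: theta1 * (f1 - phi)."""
    theta = np.array([theta1, 1.0 - theta1])
    f = A @ theta              # fitness of Dove and Hawk
    phi = theta @ f            # average fitness
    return theta1 * (f[0] - phi)

def jacobian(theta1):
    return 6 * theta1**2 - 6 * theta1 + 1   # derivative of the polynomial form

for fp in (0.0, 1.0, 0.5):
    print(fp, jacobian(fp))    # 1.0, 1.0, -0.5: only 0.5 is stable

# Euler integration from an arbitrary start converges to 0.5:
theta1 = 0.9
for _ in range(2000):
    theta1 += 0.01 * dtheta(theta1)
print(round(theta1, 3))        # ~0.5
```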

Summary
This chapter:
- gave a comprehensive introduction to formal game theory, including different notations (matrix, formal, extensive);
- introduced advanced solution concepts of decision theory and game theory (including SESDS and Nash Equilibria in pure and mixed strategies, backward induction, ...);
- covered non-normal-form games (including extensive games and repeated games);
- gave a formal introduction to Evolutionary Game Theory (ESS and replicator dynamics).

Readings for the next lecture
Compulsory reading: Introduction to Simulation and Agent-Based Modeling, in: Elsner/Heinrich/Schwardt: The Microeconomics of Complex Economies, pp. 227-247.
For further readings, visit the companion website.