Replicator Dynamics 1

Nash makes sense (arguably) if players are uber-rational and calculating 2

Such as Auctions 3

Or Oligopolies [Images courtesy of afagen (CC BY-NC-SA) and longislandwins (CC BY) on Flickr.] 4

But why would game theory matter for our puzzles? 5

Norms/rights/morality are not chosen; rather: We believe we have rights! We feel angry when someone uses a service but doesn't pay 6

But where do these feelings/beliefs come from? 7

In this lecture, we will introduce replicator dynamics. The replicator dynamic is a simple model of evolution and of prestige-biased learning in games. Today, we will show that the replicator dynamic leads to Nash 8

We consider a large population, N, of players. Each period, a player is randomly matched with another player, and they play a two-player game 9

Each player is assigned a strategy. Players cannot choose their strategies 10

We can think of this in a few ways, e.g.: players are born with their mother's strategy (ignoring sexual reproduction), or players imitate others' strategies 11

Note: Rationality and consciousness don't enter the picture. 12

Suppose there are two strategies, A and B. We start with some number, N_A, of players assigned strategy A, and some number, N_B, of players assigned strategy B 13

We denote the proportion of the population playing strategy A as x_A, so: x_A = N_A/N and x_B = N_B/N 14

The state of the population is given by (x_A, x_B), where x_A ≥ 0, x_B ≥ 0, and x_A + x_B = 1. 15

Since each player interacts with another randomly chosen player in the population, a player's EXPECTED payoff is determined by the payoff matrix and the proportion of each strategy in the population. 16

For example, consider the coordination game, with a > c and b < d:

        A      B
  A    a, a   b, c
  B    c, b   d, d

And the following starting frequencies: x_A = .75, x_B = .25 17

The payoff for a player who is playing A is f_A. Since f_A depends on x_A and x_B, we write f_A(x_A, x_B):

f_A(x_A, x_B) = (probability of interacting with an A player)*u_A(A,A) + (probability of interacting with a B player)*u_A(A,B) = x_A*a + x_B*b = .75*a + .25*b 18

We interpret payoff as rate of reproduction (fitness). 19

The average fitness, f, of the population is the weighted average of the two fitness values: f(x_A, x_B) = x_A*f_A(x_A, x_B) + x_B*f_B(x_A, x_B) 20
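
As a quick numeric check, here is a minimal Python sketch of these fitness calculations. The payoff values a = 2, b = 0, c = 0, d = 1 are illustrative assumptions (any values with a > c and b < d would do); the frequencies are the x_A = .75, x_B = .25 from the example above.

    # Sketch: expected fitness in the two-strategy coordination game.
    # Payoffs are assumed for illustration only; they satisfy a > c and b < d.
    a, b, c, d = 2.0, 0.0, 0.0, 1.0

    x_A, x_B = 0.75, 0.25            # starting frequencies from the example

    f_A = x_A * a + x_B * b          # expected payoff of an A player
    f_B = x_A * c + x_B * d          # expected payoff of a B player
    f_bar = x_A * f_A + x_B * f_B    # population-average fitness

    print(f_A, f_B, f_bar)           # 1.5 0.25 1.1875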

How fast do x_A and x_B grow? Recall x_A = N_A/N. First, we need to know how fast N_A grows. Let N_A' = dN_A/dt. Each individual reproduces at a rate f_A, and there are N_A of them, so: N_A' = N_A * f_A(x_A, x_B). Next we need to know how fast N grows. By the same logic: N' = N * f(x_A, x_B). By the quotient rule, x_A' = (N_A'*N - N_A*N')/N^2 = x_A*f_A(x_A, x_B) - x_A*f(x_A, x_B), and with a little simplification... 21

This is the replicator equation: x_A' = x_A * (f_A(x_A, x_B) - f(x_A, x_B)). The first factor is the current frequency of the strategy; the term in parentheses is the strategy's own fitness relative to the average. 22

Growth rate of A: x_A' = x_A * (f_A(x_A, x_B) - f(x_A, x_B)). The current frequency of the strategy appears because that's how many As can reproduce; own fitness relative to the average is our key property: more successful strategies grow faster. 23

x_A' = x_A * (f_A(x_A, x_B) - f(x_A, x_B)). If x_A > 0 (the proportion of As is non-zero) and f_A > f (the fitness of A is above average), then x_A' > 0 (A will be increasing in the population). 24

The steady states are: x_A = 0, x_A = 1, and any x_A such that f_A(x_A, x_B) = f_B(x_A, x_B). 25
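
To see the steady states and the dynamics concretely, here is a small forward-Euler simulation of the replicator equation, again under the illustrative payoffs a = 2, b = 0, c = 0, d = 1 assumed above (not part of the lecture). Starting from x_A = .75 the dynamic climbs toward x_A = 1, while x_A = 0, x_A = 1, and the interior point where f_A = f_B do not move.

    # Sketch: Euler integration of x_A' = x_A * (f_A - f_bar) for the
    # coordination game with assumed payoffs (a > c, b < d).
    a, b, c, d = 2.0, 0.0, 0.0, 1.0

    def step(x_A, dt=0.01):
        x_B = 1.0 - x_A
        f_A = x_A * a + x_B * b
        f_B = x_A * c + x_B * d
        f_bar = x_A * f_A + x_B * f_B
        return x_A + dt * x_A * (f_A - f_bar)

    x_A = 0.75
    for _ in range(5000):
        x_A = step(x_A)
    print(round(x_A, 4))                  # approaches 1.0

    for rest in (0.0, 1.0, 1.0 / 3.0):    # 1/3 is where f_A = f_B for these payoffs
        print(rest, step(rest))           # each steady state stays (essentially) unchanged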

Recall the payoffs of our (coordination) game, with a > c and b < d:

        A      B
  A    a, a   b, c
  B    c, b   d, d    26

[Figure: f_A(x_A, x_B) and f_B(x_A, x_B) plotted over x_A from 0 to 1; the filled markers indicate asymptotically stable steady states, i.e., steady states such that the dynamics point toward them.] 27

What were the pure Nash equilibria of the coordination game? 28

        A      B
  A    a, a   b, c
  B    c, b   d, d    29

[Figure: the unit interval for x_A, with the pure equilibria marked at x_A = 0 and x_A = 1.] 30

And the mixed strategy equilibrium is: x_A = (d - b) / (d - b + a - c) 31

[Figure: the unit interval for x_A, with the mixed equilibrium at x_A = (d - b) / (d - b + a - c) marked between 0 and 1.] 32
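
As a worked check, with the illustrative payoffs a = 2, b = 0, c = 0, d = 1 assumed earlier: x_A = (d - b)/(d - b + a - c) = 1/(1 + 2) = 1/3, and at that state f_A = (1/3)*2 + (2/3)*0 = 2/3 = (1/3)*0 + (2/3)*1 = f_B, so both strategies earn the same payoff there and the point is a steady state of the dynamic.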

The replicator dynamic teaches us: we end up at Nash (if we end up anywhere), AND not just any Nash (e.g., not the mixed Nash in the coordination game) 33

Let's generalize this to three strategies: R, P, S 34

Now N_R is the number playing R, N_P is the number playing P, and N_S is the number playing S 35

Now x_R is the proportion playing R, x_P is the proportion playing P, and x_S is the proportion playing S 36

The state of the population is (x_R, x_P, x_S), where x_R ≥ 0, x_P ≥ 0, x_S ≥ 0, and x_R + x_P + x_S = 1 37

For example, consider the Rock-Paper-Scissors game, where the row player's payoffs are:

        R    P    S
  R     0   -1    1
  P     1    0   -1
  S    -1    1    0

With starting frequencies: x_R = .25, x_P = .25, x_S = .5 38

The fitness for a player playing R is f_R: f_R(x_R, x_P, x_S) = (probability of interacting with an R player)*u_R(R,R) + (probability of interacting with a P player)*u_R(R,P) + (probability of interacting with an S player)*u_R(R,S) = .25*0 + .25*(-1) + .5*1 = .25 39

In general, the fitness for players with strategy R is: f_R(x_R, x_P, x_S) = x_R*0 + x_P*(-1) + x_S*1 40

The average fitness, f, of the population is: f(x_R, x_P, x_S) = x_R*f_R(x_R, x_P, x_S) + x_P*f_P(x_R, x_P, x_S) + x_S*f_S(x_R, x_P, x_S) 41
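
A minimal sketch checking these numbers at the starting frequencies (it just reproduces the arithmetic above):

    # Sketch: fitness of each strategy in RPS at x_R = .25, x_P = .25, x_S = .5.
    x_R, x_P, x_S = 0.25, 0.25, 0.5
    f_R = x_R * 0 + x_P * (-1) + x_S * 1     # =  0.25
    f_P = x_R * 1 + x_P * 0 + x_S * (-1)     # = -0.25
    f_S = x_R * (-1) + x_P * 1 + x_S * 0     # =  0.00
    f_bar = x_R * f_R + x_P * f_P + x_S * f_S
    print(f_R, f_P, f_S, f_bar)              # 0.25 -0.25 0.0 0.0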

The replicator equation is still: x_R' = x_R * (f_R(x_R, x_P, x_S) - f(x_R, x_P, x_S)), i.e., the current frequency of the strategy times its own fitness relative to the average 42

[Figure: the simplex of population states, with vertices x_R = 1, x_P = 1, x_S = 1; edge midpoints such as (x_R = .5, x_P = .5), (x_S = .5, x_P = .5), (x_R = .5, x_S = .5); and the center (x_R = .33, x_P = .33, x_S = .33).] 43

[Figure (panels A, B, C): replicator-dynamic trajectories on the simplex for this game. Image by MIT OpenCourseWare.] 44

Notice this is not asymptotically stable: it cycles. We will show this in the HW 45
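
To illustrate the cycling, here is a sketch that integrates the three-strategy replicator equation for the RPS matrix above (forward Euler, so the closed orbits of the continuous-time dynamic drift slightly). The simulate helper below is an illustrative assumption, not code from the course; starting from (x_R, x_P, x_S) = (.25, .25, .5), the frequencies keep circling (1/3, 1/3, 1/3) rather than settling down.

    # Sketch: replicator dynamics with three strategies (R, P, S).
    # Rows = own strategy, columns = opponent's strategy (row player's payoff).
    RPS = [[ 0.0, -1.0,  1.0],
           [ 1.0,  0.0, -1.0],
           [-1.0,  1.0,  0.0]]

    def simulate(payoff, x, dt=0.01, steps=500):
        """Forward-Euler integration of x_i' = x_i * (f_i - f_bar)."""
        for _ in range(steps):
            f = [sum(payoff[i][j] * x[j] for j in range(3)) for i in range(3)]
            f_bar = sum(x[i] * f[i] for i in range(3))
            x = [x[i] + dt * x[i] * (f[i] - f_bar) for i in range(3)]
        return x

    x = [0.25, 0.25, 0.5]                     # (x_R, x_P, x_S)
    for _ in range(6):
        x = simulate(RPS, x)
        print([round(v, 3) for v in x])       # keeps cycling, never converges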

        R    P    S
  R     0   -1    2
  P     2    0   -1
  S    -1    2    0    46

[Figure (panels A, B, C): replicator-dynamic trajectories on the simplex for this modified game. Image by MIT OpenCourseWare.] 47

Note that now it is asymptotically stable. We will solve for the Nash equilibrium and show this is what the dynamics look like in the HW 48
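
Reusing the simulate sketch from above with this modified matrix (again an illustrative check, not course code), the trajectory now spirals inward toward (1/3, 1/3, 1/3) instead of cycling, consistent with the asymptotic stability noted here:

    # Modified RPS: a win now pays 2 while a loss still costs 1.
    RPS2 = [[ 0.0, -1.0,  2.0],
            [ 2.0,  0.0, -1.0],
            [-1.0,  2.0,  0.0]]

    x = simulate(RPS2, [0.25, 0.25, 0.5], steps=20000)
    print([round(v, 3) for v in x])           # close to [0.333, 0.333, 0.333]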

For further reading, see: Nowak, Evolutionary Dynamics, Ch. 4; Weibull, Evolutionary Game Theory, Ch. 3. Some notes: the model can be extended to any number of strategies; it doesn't always converge, but when it does converge, it converges to Nash; we will later use this to provide evidence that dynamics predict behavior better than Nash 49

MIT OpenCourseWare http://ocw.mit.edu 14.11 Insights from Game Theory into Social Behavior Fall 2013 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.