Chapter 5

Regret Minimization and Security Strategies

Until now we implicitly adopted the view that a Nash equilibrium is a desirable outcome of a strategic game. In this chapter we consider two alternative views that help us to understand the reasoning of players who either want to avoid costly mistakes or fear a bad outcome. Both concepts can be rigorously formalized.

5.1 Regret minimization

Consider the following game:

             L          R
    T    100, 100     0, 0
    B      0, 0       1, 1

This is an example of a coordination problem, in which there are two satisfactory outcomes (read: Nash equilibria), (T, L) and (B, R), of which one is obviously better for both players. In this game no strategy strictly or weakly dominates the other and each strategy is a best response to some other strategy. So using the concepts introduced so far we cannot explain why rational players would end up choosing the Nash equilibrium (T, L). In this section we explain how this choice can be justified using the concept of regret minimization.
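For concreteness, one can check mechanically that (T, L) and (B, R) are indeed the only pure Nash equilibria of this game. A minimal Python sketch (the dictionary encoding of the game and the helper name are ours, not part of the text):

```python
# The coordination game above: payoffs[(row, col)] = (payoff of player 1, payoff of player 2).
payoffs = {("T", "L"): (100, 100), ("T", "R"): (0, 0),
           ("B", "L"): (0, 0),     ("B", "R"): (1, 1)}
rows, cols = ["T", "B"], ["L", "R"]

def is_nash(s1, s2):
    # A joint strategy is a Nash equilibrium if neither player gains by a unilateral deviation.
    no_dev1 = all(payoffs[(t, s2)][0] <= payoffs[(s1, s2)][0] for t in rows)
    no_dev2 = all(payoffs[(s1, t)][1] <= payoffs[(s1, s2)][1] for t in cols)
    return no_dev1 and no_dev2

equilibria = [(s1, s2) for s1 in rows for s2 in cols if is_nash(s1, s2)]
print(equilibria)  # [('T', 'L'), ('B', 'R')]
```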
With each finite strategic game G := (S_1, ..., S_n, p_1, ..., p_n) we first associate a regret-recording game G' := (S_1, ..., S_n, r_1, ..., r_n) in which each payoff function r_i is defined by

    r_i(s_i, s_{-i}) := p_i(s'_i, s_{-i}) − p_i(s_i, s_{-i}),

where s'_i is player i's best response to s_{-i}. We then call r_i(s_i, s_{-i}) player i's regret of choosing s_i against s_{-i}. Note that by definition r_i(s) ≥ 0 for all s.

For example, for the above game the corresponding regret-recording game is

             L         R
    T      0, 0     1, 100
    B    100, 1      0, 0

Indeed, r_1(B, L) := p_1(T, L) − p_1(B, L) = 100, and similarly for the other seven entries.

Let now

    regret_i(s_i) := max_{s_{-i} ∈ S_{-i}} r_i(s_i, s_{-i}).

So regret_i(s_i) is the maximal regret player i can have from choosing s_i. We then call any strategy s_i at which the function regret_i attains its minimum, i.e., one such that

    regret_i(s_i) = min_{s'_i ∈ S_i} regret_i(s'_i),

a regret minimization strategy for player i. In other words, s_i is a regret minimization strategy for player i if

    max_{s_{-i} ∈ S_{-i}} r_i(s_i, s_{-i}) = min_{s'_i ∈ S_i} max_{s_{-i} ∈ S_{-i}} r_i(s'_i, s_{-i}).

The following intuition is helpful here. Suppose the opponents of player i are able to perfectly anticipate which strategy player i is about to play (for example, by being informed through a third party which strategy player i has just selected and is about to play). Suppose further that they aim at inflicting on player i the maximum damage in the form of maximal regret, and that player i is aware of these circumstances. Then to minimize his regret player i should select a regret minimization strategy.

We could say that a regret minimization strategy will be chosen by a player who wants to avoid making a costly mistake, where by a mistake we mean a choice of a strategy that is not a best response to the joint strategy of the opponents.
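The passage from a game to its regret-recording game is mechanical. A short Python sketch (the dictionary encoding and the function name are our own, not part of the text):

```python
# Compute the regret-recording game of a finite two-player game.
# p[(row, col)] = (payoff of player 1, payoff of player 2).
p = {("T", "L"): (100, 100), ("T", "R"): (0, 0),
     ("B", "L"): (0, 0),     ("B", "R"): (1, 1)}
rows, cols = ["T", "B"], ["L", "R"]

def regret_game(p, rows, cols):
    """Return r with r[(s1, s2)] = (regret of player 1, regret of player 2)."""
    r = {}
    for s1 in rows:
        for s2 in cols:
            best1 = max(p[(t, s2)][0] for t in rows)  # best-response payoff of player 1 against s2
            best2 = max(p[(s1, t)][1] for t in cols)  # best-response payoff of player 2 against s1
            r[(s1, s2)] = (best1 - p[(s1, s2)][0], best2 - p[(s1, s2)][1])
    return r

r = regret_game(p, rows, cols)
print(r[("B", "L")])  # (100, 1), as computed in the text
```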
To clarify this notion let us return to our example of the coordination game. To visualize the outcomes of the functions regret_1 and regret_2 we put the results in an additional row and column:

                  L         R     regret_1
    T           0, 0     1, 100       1
    B         100, 1      0, 0      100
    regret_2      1       100

So T minimizes regret_1 and L minimizes regret_2. Hence (T, L) is the unique pair of regret minimization strategies. This shows that using the concept of regret minimization we succeeded in singling out the preferred Nash equilibrium in the considered coordination game.

It is important to note that the concept of regret minimization does not allow us to solve all coordination problems. For example, it does not help us in selecting a Nash equilibrium in symmetric situations, for instance in the game

            L       R
    T    1, 1    0, 0
    B    0, 0    1, 1

Indeed, in this case the regret of each strategy is 1, so regret minimization does not allow us to distinguish between the strategies. Analogous considerations hold for the Battle of the Sexes game from Chapter 1.

Regret minimization is based on different intuitions than strict and weak dominance. As a result these notions are incomparable. In general, only the following limited observation holds. Recall that the notion of a dominant strategy was introduced in Exercise 8 on page 33.

Note 14 (Regret Minimization) Consider a finite game. Every dominant strategy is a regret minimization strategy.

Proof. Fix a finite game (S_1, ..., S_n, p_1, ..., p_n). Note that each dominant strategy s_i of player i is a best response to each s_{-i} ∈ S_{-i}. So by the definition of the regret-recording game, r_i(s_i, s_{-i}) = 0 for all s_{-i} ∈ S_{-i}. This shows that s_i is a regret minimization strategy for player i, since r_i(s) ≥ 0 for all joint strategies s.
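The selection displayed in the table above is easy to script. A minimal Python sketch (the dictionary encoding and function names are ours):

```python
# Regret minimization in the coordination game: for each player, pick the
# strategy whose worst-case regret is smallest.
p = {("T", "L"): (100, 100), ("T", "R"): (0, 0),
     ("B", "L"): (0, 0),     ("B", "R"): (1, 1)}
rows, cols = ["T", "B"], ["L", "R"]

def regret1(s1):
    # worst-case regret of the row player when committing to s1
    return max(max(p[(t, s2)][0] for t in rows) - p[(s1, s2)][0] for s2 in cols)

def regret2(s2):
    # worst-case regret of the column player when committing to s2
    return max(max(p[(s1, t)][1] for t in cols) - p[(s1, s2)][1] for s1 in rows)

print({s: regret1(s) for s in rows})  # {'T': 1, 'B': 100}
print({s: regret2(s) for s in cols})  # {'L': 1, 'R': 100}
print(min(rows, key=regret1), min(cols, key=regret2))  # T L
```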
The process of removing strategies that do not achieve regret minimization can be iterated. We call this process iterated regret minimization. The example of the coordination game we analyzed shows that the process of regret minimization may lead to a loss of some Nash equilibria. In fact, as we shall see in a moment, during this process all Nash equilibria can be lost. On the other hand, as recently suggested by J. Halpern and R. Pass, in some games iterated regret minimization yields a more intuitive outcome. As an example let us return to the Traveller's Dilemma game considered in Example 1.

Example 13 (Traveller's dilemma revisited) Let us first determine in this game the regret minimization strategies for each player. Take a joint strategy s.

Case 1. s_{-i} = 2. Then the best response to s_{-i} is 2, which yields the payoff 2. So player i's regret of choosing s_i against s_{-i} is 0 if s_i = s_{-i} and 2 if s_i > s_{-i}; in either case it is at most 2.

Case 2. s_{-i} > 2. Then the best response to s_{-i} is s_{-i} − 1, which yields the payoff s_{-i} + 1.

If s_{-i} < s_i, then p_i(s) = s_{-i} − 2. So player i's regret of choosing s_i against s_{-i} is in this case 3.

If s_{-i} = s_i, then p_i(s) = s_i. So player i's regret of choosing s_i against s_{-i} is in this case 1.

Finally, if s_{-i} > s_i, then p_i(s) = s_i + 2. So player i's regret of choosing s_i against s_{-i} is in this case s_{-i} − s_i − 1.

To summarize, we have

    regret_i(s_i) = max(3, max_{s_{-i} > s_i} (s_{-i} − s_i − 1)) = max(3, 99 − s_i).

So the minimal regret is achieved when 99 − s_i ≤ 3, i.e., when the strategy s_i is in the interval [96, 100]. Hence removing all strategies that do not achieve regret minimization yields a game in which each player has the strategies in the interval [96, 100]. In particular, we lost in this way the unique Nash equilibrium of this game, (2, 2).

We now repeat this elimination procedure. To compute the outcome we consider again two, though now different, cases.

Case 1. s_i = 97. The following table summarizes player i's regret of choosing 97 against each strategy s_{-i} of the opponent:

    strategy of    best response    regret of
    player -i      of player i      player i
       96               96              2
       97               96              1
       98               97              0
       99               98              1
      100               99              2

Case 2. s_i ≠ 97. The following table summarizes player i's regret of choosing s_i, where for each strategy of player i we list a strategy of the opponent for which player i's regret is maximal:

    strategy of    relevant strategy    regret of
    player i       of player -i         player i
       96                100                3
       98                 97                3
       99                 97                3
      100                 97                3

So each strategy of player i different from 97 has regret 3, while 97 has regret 2. This means that the second round of elimination of the strategies that do not achieve regret minimization yields a game in which each player has just one strategy, namely 97.

Recall again that the unique Nash equilibrium of the Traveller's Dilemma game is (2, 2). So iterated regret minimization yields here a radically different outcome than the analysis based on Nash equilibria. Interestingly, this outcome, (97, 97), has been confirmed by empirical studies.

We conclude this section by showing that iterated regret minimization is not order independent. To this end consider the following game:
            L       R
    T    2, 1    0, 3
    B    0, 2    1, 1

The corresponding regret-recording game, together with the outcomes of the functions regret_1 and regret_2, is as follows:

                  L       R     regret_1
    T          0, 2    1, 0        1
    B          2, 0    0, 1        2
    regret_2      2       1

This shows that (T, R) is the unique pair of regret minimization strategies in the original game. So by removing from the original game the strategies B and L that do not achieve regret minimization we reduce it to

            R
    T    0, 3

On the other hand, if we initially remove only strategy L, then we obtain the game

            R
    T    0, 3
    B    1, 1

Now the only strategy that does not achieve regret minimization is T. By removing it we obtain the game

            R
    B    1, 1

So the two elimination orders yield different outcomes.

5.2 Security strategies

Consider the following game:

             L           R
    T     0, 0      101, 1
    B    1, 101    100, 100
This is an extreme form of a Chicken game, sometimes also called a Hawk-Dove game or a Snowdrift game. The game of Chicken models two drivers driving at each other on a narrow road. If neither driver swerves ("chickens out"), the result is a crash. The best option for each driver is to stay straight while the other swerves. This yields a situation in which each driver, in attempting to realize the best outcome for himself, risks a crash.

The description of this game as a snowdrift game stresses the advantages of cooperation. The game involves two drivers who are trapped on opposite sides of a snowdrift. Each has the option of staying in the car or shovelling snow to clear a path. Letting the other driver do all the work is the best option, but being exploited by shovelling while the other driver sits in the car is still better than doing nothing.

Note that this game has two Nash equilibria, (T, R) and (B, L). However, there seems to be no reason for selecting either Nash equilibrium, as each of them is grossly unfair to the player who receives only 1. In contrast, (B, R), which is not a Nash equilibrium, looks like the most reasonable outcome. In it each player receives a payoff close to the one he receives in the Nash equilibrium of his preference. Also, why should a player risk the payoff 0 in an attempt to secure the payoff 101, which is only a fraction bigger than his payoff 100 in (B, R)?

Note that in this game no strategy strictly or weakly dominates the other and each strategy is a best response to some other strategy. So these concepts are useless in analyzing this game. Moreover, the regret of each strategy is 1, so regret minimization is of no use here either.

We now introduce the concept of a security strategy, which allows us to single out the joint strategy (B, R) as the most reasonable outcome for both players. Fix a, not necessarily finite, strategic game G := (S_1, ..., S_n, p_1, ..., p_n).
Player i, when considering which strategy s_i to select, has to take into account which strategies his opponents will choose. A worst-case scenario for player i is that, given his choice of s_i, his opponents choose a joint strategy for which player i's payoff is the lowest.[1] Once this lowest payoff is identified for each strategy s_i of player i, a strategy can be selected that leads to the minimum damage.

[1] We assume here that such a joint strategy s_{-i} exists.
To formalize this concept, for each i ∈ {1, ..., n} we consider the function[2]

    f_i : S_i → R

defined by

    f_i(s_i) := min_{s_{-i} ∈ S_{-i}} p_i(s_i, s_{-i}).

[2] In what follows we assume that all considered minima and maxima exist. This assumption is obviously satisfied in finite games. In a later chapter we shall discuss a natural class of infinite games for which this assumption is satisfied as well.

We call any strategy s_i at which the function f_i attains its maximum, i.e., one such that

    f_i(s_i) = max_{s'_i ∈ S_i} f_i(s'_i),

a security strategy, or a maxminimizer, for player i. We denote this maximum, that is,

    max_{s_i ∈ S_i} min_{s_{-i} ∈ S_{-i}} p_i(s_i, s_{-i}),

by maxmin_i and call it the security payoff of player i. In other words, s_i is a security strategy for player i if

    min_{s_{-i} ∈ S_{-i}} p_i(s_i, s_{-i}) = maxmin_i.

Note that f_i(s_i) is the minimum payoff player i is guaranteed to secure for himself when he selects strategy s_i. In turn, the security payoff maxmin_i of player i is the minimum payoff he is guaranteed to secure for himself in general. To achieve at least this payoff he just needs to select any security strategy.

The following intuition is helpful here. Suppose the opponents of player i are able to perfectly anticipate which strategy player i is about to play. Suppose further that they aim at inflicting on player i the maximum damage (in the form of the lowest payoff) and that player i is aware of these circumstances. Then player i should select a strategy that causes the minimum damage for him. Such a strategy is exactly a security strategy, and it guarantees him at least the payoff maxmin_i. We could say that a security strategy will be chosen by a pessimistic player, i.e., one who fears the worst outcome for himself.

To clarify this notion let us return to our example of the chicken game. Clearly, B and R are the only security strategies in this game. Indeed, we have f_1(T) = f_2(L) = 0 and f_1(B) = f_2(R) = 1. So we succeeded in singling out in this game the outcome (B, R) using the concept of a security strategy.

The following counterpart of the Regret Minimization Note 14 holds.

Note 15 (Security) Consider a finite game. Every dominant strategy is a security strategy.

Proof. Fix a game (S_1, ..., S_n, p_1, ..., p_n) and suppose that s_i is a dominant strategy of player i. By definition, for all s'_i ∈ S_i and s_{-i} ∈ S_{-i} we have

    p_i(s_i, s_{-i}) ≥ p_i(s'_i, s_{-i}),

so for all strategies s'_i of player i

    min_{s_{-i}} p_i(s_i, s_{-i}) ≥ min_{s_{-i}} p_i(s'_i, s_{-i}).

Hence

    min_{s_{-i}} p_i(s_i, s_{-i}) ≥ max_{s'_i ∈ S_i} min_{s_{-i}} p_i(s'_i, s_{-i}).

This concludes the proof.

Next, we introduce a notion dual to the security payoff maxmin_i. It is not needed for the analysis of security strategies, but it will turn out to be relevant in a later chapter. For each i ∈ {1, ..., n} we consider the function

    F_i : S_{-i} → R

defined by

    F_i(s_{-i}) := max_{s_i ∈ S_i} p_i(s_i, s_{-i}).

We then denote the value min_{s_{-i} ∈ S_{-i}} F_i(s_{-i}), i.e.,

    min_{s_{-i} ∈ S_{-i}} max_{s_i ∈ S_i} p_i(s_i, s_{-i}),

by minmax_i.

The following intuition is helpful here. Suppose that now player i is able to perfectly anticipate which strategies his opponents are about to play. Using this information player i can compute the minimum payoff he is guaranteed to achieve in such circumstances: it is minmax_i. This lowest payoff for player i can be enforced by his opponents if they choose any joint strategy s_{-i} at which the function F_i attains its minimum, i.e., one such that

    F_i(s_{-i}) = min_{s'_{-i} ∈ S_{-i}} F_i(s'_{-i}).

To clarify the notions of maxmin_i and minmax_i consider an example.

Example 14 Consider the following two-player game, in which we omit the payoffs of the second, i.e., column, player:

            L    M    R
    T       3    4    5
    B       6    2    1

To visualize the outcomes of the functions f_1 and F_1 we put the results in an additional row and column:

            L    M    R    f_1
    T       3    4    5     3
    B       6    2    1     1
    F_1     6    4    5

That is, in the f_1 column we list for each row its minimum, and in the F_1 row we list for each column its maximum.

Since f_1(T) = 3 and f_1(B) = 1, we conclude that maxmin_1 = 3. So the security payoff of the row player is 3 and T is his unique security strategy. In other words, the row player can secure for himself at least the payoff 3 and achieves this by choosing strategy T.

Next, since F_1(L) = 6, F_1(M) = 4 and F_1(R) = 5, we get minmax_1 = 4. In other words, if the row player knows which strategy the column player is about to play, he can secure for himself at least the payoff 4. Indeed:

if the row player knows that the column player is about to play L, then he should play B (and secure the payoff 6),

if the row player knows that the column player is about to play M, then he should play T (and secure the payoff 4),

if the row player knows that the column player is about to play R, then he should play T (and secure the payoff 5).
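The f_1/F_1 bookkeeping of Example 14 is easy to reproduce in code. A sketch, with the row player's payoffs hard-coded as in the example (the variable names are ours):

```python
# maxmin and minmax for the row player of Example 14.
p1 = {("T", "L"): 3, ("T", "M"): 4, ("T", "R"): 5,
      ("B", "L"): 6, ("B", "M"): 2, ("B", "R"): 1}
rows, cols = ["T", "B"], ["L", "M", "R"]

f1 = {s1: min(p1[(s1, s2)] for s2 in cols) for s1 in rows}  # row minima
F1 = {s2: max(p1[(s1, s2)] for s1 in rows) for s2 in cols}  # column maxima

maxmin1 = max(f1.values())  # attained by the security strategy T
minmax1 = min(F1.values())  # enforced by the column player choosing M
print(f1, F1, maxmin1, minmax1)
```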
In the above example maxmin_1 < minmax_1. In general the following observation holds. From now on, to simplify the notation we assume that s_i and s_{-i} range over, respectively, S_i and S_{-i}.

Lemma 16 (Lower Bound)

(i) For all i ∈ {1, ..., n} we have maxmin_i ≤ minmax_i.

(ii) If s is a Nash equilibrium of G, then for all i ∈ {1, ..., n} we have minmax_i ≤ p_i(s).

Item (i) formalizes the intuition that one can take a better decision when more information is available (in this case, about which strategies the opponents are about to play). Item (ii) provides a lower bound on the payoff in each Nash equilibrium, which explains the name of the lemma.

Proof. (i) Fix i. Let s'_i be such that min_{s_{-i}} p_i(s'_i, s_{-i}) = maxmin_i and s'_{-i} such that max_{s_i} p_i(s_i, s'_{-i}) = minmax_i. We then have the following string of equalities and inequalities:

    maxmin_i = min_{s_{-i}} p_i(s'_i, s_{-i}) ≤ p_i(s'_i, s'_{-i}) ≤ max_{s_i} p_i(s_i, s'_{-i}) = minmax_i.

(ii) Fix i. For each Nash equilibrium (s'_i, s'_{-i}) of G we have

    minmax_i = min_{s_{-i}} max_{s_i} p_i(s_i, s_{-i}) ≤ max_{s_i} p_i(s_i, s'_{-i}) = p_i(s'_i, s'_{-i}),

where the last equality holds since s'_i is a best response to s'_{-i}.

To clarify the difference between regret minimization and security strategies consider the following variant of a coordination game:

             L          R
    T    100, 100     0, 0
    B      1, 1       2, 2

It is easy to check that players who select regret minimization strategies will choose T and L, which yields the payoff 100 to each of them. In contrast, players who select security strategies will choose B and L and will receive only 1 each.

Next, consider the following game:
            L       M         R
    T    5, 5    0, 0     97, 1
    B    1, 0    1, 0    100, 100

Here the security strategies are B and R, and their choice by the players yields the payoff 100 to each of them. In contrast, the regret minimization strategies are T (with regret 3) and R (with regret 4), and their choice by the players yields them the respective payoffs 97 and 1. So the outcomes of selecting regret minimization strategies and of selecting security strategies are incomparable.

Finally, note that in general there is no relation between the equality maxmin_i = minmax_i and the existence of a Nash equilibrium. To see this, let us complete the game considered in Example 14 with payoffs for the column player as follows:

            L       M       R
    T    3, 1    4, 0    5, 1
    B    6, 1    2, 0    1, 1

We already noted that maxmin_1 < minmax_1 holds here. However, this game has two Nash equilibria, (T, R) and (B, L). Further, the following game

            L       M       R
    T    3, 1    3, 0    5, 0
    B    6, 0    2, 1    1, 0

has no Nash equilibrium and yet maxmin_1 = minmax_1 (both equal 3). In a later chapter we shall discuss a class of two-player games for which there is a close relation between the existence of a Nash equilibrium and the equalities maxmin_i = minmax_i.
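The contrast between the two concepts in the 2×3 game with payoffs (5,5), (0,0), (97,1) in the top row and (1,0), (1,0), (100,100) in the bottom row can be checked mechanically. A sketch, using helper functions of our own devising:

```python
# Security strategies vs. regret minimization strategies in the 2x3 game above.
p = {("T", "L"): (5, 5), ("T", "M"): (0, 0), ("T", "R"): (97, 1),
     ("B", "L"): (1, 0), ("B", "M"): (1, 0), ("B", "R"): (100, 100)}
rows, cols = ["T", "B"], ["L", "M", "R"]

def security1():
    # row strategy maximizing the row player's worst-case payoff
    return max(rows, key=lambda s1: min(p[(s1, s2)][0] for s2 in cols))

def security2():
    # column strategy maximizing the column player's worst-case payoff
    return max(cols, key=lambda s2: min(p[(s1, s2)][1] for s1 in rows))

def regret1(s1):
    return max(max(p[(t, s2)][0] for t in rows) - p[(s1, s2)][0] for s2 in cols)

def regret2(s2):
    return max(max(p[(s1, t)][1] for t in cols) - p[(s1, s2)][1] for s1 in rows)

print(security1(), security2())                        # B R  -> payoffs (100, 100)
print(min(rows, key=regret1), min(cols, key=regret2))  # T R  -> payoffs (97, 1)
```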
More informationBest-Reply Sets. Jonathan Weinstein Washington University in St. Louis. This version: May 2015
Best-Reply Sets Jonathan Weinstein Washington University in St. Louis This version: May 2015 Introduction The best-reply correspondence of a game the mapping from beliefs over one s opponents actions to
More informationRepeated Games with Perfect Monitoring
Repeated Games with Perfect Monitoring Mihai Manea MIT Repeated Games normal-form stage game G = (N, A, u) players simultaneously play game G at time t = 0, 1,... at each date t, players observe all past
More informationAn introduction on game theory for wireless networking [1]
An introduction on game theory for wireless networking [1] Ning Zhang 14 May, 2012 [1] Game Theory in Wireless Networks: A Tutorial 1 Roadmap 1 Introduction 2 Static games 3 Extensive-form games 4 Summary
More informationPrisoner s Dilemma. CS 331: Artificial Intelligence Game Theory I. Prisoner s Dilemma. Prisoner s Dilemma. Prisoner s Dilemma.
CS 331: rtificial Intelligence Game Theory I You and your partner have both been caught red handed near the scene of a burglary. oth of you have been brought to the police station, where you are interrogated
More informationCUR 412: Game Theory and its Applications, Lecture 12
CUR 412: Game Theory and its Applications, Lecture 12 Prof. Ronaldo CARPIO May 24, 2016 Announcements Homework #4 is due next week. Review of Last Lecture In extensive games with imperfect information,
More informationpreferences of the individual players over these possible outcomes, typically measured by a utility or payoff function.
Leigh Tesfatsion 26 January 2009 Game Theory: Basic Concepts and Terminology A GAME consists of: a collection of decision-makers, called players; the possible information states of each player at each
More informationIn the Name of God. Sharif University of Technology. Microeconomics 2. Graduate School of Management and Economics. Dr. S.
In the Name of God Sharif University of Technology Graduate School of Management and Economics Microeconomics 2 44706 (1394-95 2 nd term) - Group 2 Dr. S. Farshad Fatemi Chapter 8: Simultaneous-Move Games
More informationIntroductory Microeconomics
Prof. Wolfram Elsner Faculty of Business Studies and Economics iino Institute of Institutional and Innovation Economics Introductory Microeconomics More Formal Concepts of Game Theory and Evolutionary
More informationm 11 m 12 Non-Zero Sum Games Matrix Form of Zero-Sum Games R&N Section 17.6
Non-Zero Sum Games R&N Section 17.6 Matrix Form of Zero-Sum Games m 11 m 12 m 21 m 22 m ij = Player A s payoff if Player A follows pure strategy i and Player B follows pure strategy j 1 Results so far
More informationTheir opponent will play intelligently and wishes to maximize their own payoff.
Two Person Games (Strictly Determined Games) We have already considered how probability and expected value can be used as decision making tools for choosing a strategy. We include two examples below for
More informationMicroeconomic Theory II Preliminary Examination Solutions
Microeconomic Theory II Preliminary Examination Solutions 1. (45 points) Consider the following normal form game played by Bruce and Sheila: L Sheila R T 1, 0 3, 3 Bruce M 1, x 0, 0 B 0, 0 4, 1 (a) Suppose
More informationRationalizable Strategies
Rationalizable Strategies Carlos Hurtado Department of Economics University of Illinois at Urbana-Champaign hrtdmrt2@illinois.edu Jun 1st, 2015 C. Hurtado (UIUC - Economics) Game Theory On the Agenda 1
More information(a) Describe the game in plain english and find its equivalent strategic form.
Risk and Decision Making (Part II - Game Theory) Mock Exam MIT/Portugal pages Professor João Soares 2007/08 1 Consider the game defined by the Kuhn tree of Figure 1 (a) Describe the game in plain english
More informationCHAPTER 14: REPEATED PRISONER S DILEMMA
CHAPTER 4: REPEATED PRISONER S DILEMMA In this chapter, we consider infinitely repeated play of the Prisoner s Dilemma game. We denote the possible actions for P i by C i for cooperating with the other
More informationMicroeconomics II. CIDE, MsC Economics. List of Problems
Microeconomics II CIDE, MsC Economics List of Problems 1. There are three people, Amy (A), Bart (B) and Chris (C): A and B have hats. These three people are arranged in a room so that B can see everything
More informationCMPSCI 240: Reasoning about Uncertainty
CMPSCI 240: Reasoning about Uncertainty Lecture 23: More Game Theory Andrew McGregor University of Massachusetts Last Compiled: April 20, 2017 Outline 1 Game Theory 2 Non Zero-Sum Games and Nash Equilibrium
More informationEcon 101A Final exam Mo 18 May, 2009.
Econ 101A Final exam Mo 18 May, 2009. Do not turn the page until instructed to. Do not forget to write Problems 1 and 2 in the first Blue Book and Problems 3 and 4 in the second Blue Book. 1 Econ 101A
More informationGame Theory Tutorial 3 Answers
Game Theory Tutorial 3 Answers Exercise 1 (Duality Theory) Find the dual problem of the following L.P. problem: max x 0 = 3x 1 + 2x 2 s.t. 5x 1 + 2x 2 10 4x 1 + 6x 2 24 x 1 + x 2 1 (1) x 1 + 3x 2 = 9 x
More informationECONS 424 STRATEGY AND GAME THEORY MIDTERM EXAM #2 ANSWER KEY
ECONS 44 STRATEGY AND GAE THEORY IDTER EXA # ANSWER KEY Exercise #1. Hawk-Dove game. Consider the following payoff matrix representing the Hawk-Dove game. Intuitively, Players 1 and compete for a resource,
More informationName. FINAL EXAM, Econ 171, March, 2015
Name FINAL EXAM, Econ 171, March, 2015 There are 9 questions. Answer any 8 of them. Good luck! Remember, you only need to answer 8 questions Problem 1. (True or False) If a player has a dominant strategy
More information6.207/14.15: Networks Lecture 9: Introduction to Game Theory 1
6.207/14.15: Networks Lecture 9: Introduction to Game Theory 1 Daron Acemoglu and Asu Ozdaglar MIT October 13, 2009 1 Introduction Outline Decisions, Utility Maximization Games and Strategies Best Responses
More informationGame Theory - Lecture #8
Game Theory - Lecture #8 Outline: Randomized actions vnm & Bernoulli payoff functions Mixed strategies & Nash equilibrium Hawk/Dove & Mixed strategies Random models Goal: Would like a formulation in which
More informationEcon 323 Microeconomic Theory. Practice Exam 2 with Solutions
Econ 323 Microeconomic Theory Practice Exam 2 with Solutions Chapter 10, Question 1 Which of the following is not a condition for perfect competition? Firms a. take prices as given b. sell a standardized
More informationMA200.2 Game Theory II, LSE
MA200.2 Game Theory II, LSE Answers to Problem Set [] In part (i), proceed as follows. Suppose that we are doing 2 s best response to. Let p be probability that player plays U. Now if player 2 chooses
More informationCompetition for goods in buyer-seller networks
Rev. Econ. Design 5, 301 331 (2000) c Springer-Verlag 2000 Competition for goods in buyer-seller networks Rachel E. Kranton 1, Deborah F. Minehart 2 1 Department of Economics, University of Maryland, College
More informationMixed Strategies. Samuel Alizon and Daniel Cownden February 4, 2009
Mixed Strategies Samuel Alizon and Daniel Cownden February 4, 009 1 What are Mixed Strategies In the previous sections we have looked at games where players face uncertainty, and concluded that they choose
More informationEconomics and Computation
Economics and Computation ECON 425/563 and CPSC 455/555 Professor Dirk Bergemann and Professor Joan Feigenbaum Reputation Systems In case of any questions and/or remarks on these lecture notes, please
More informationIntroduction to Game Theory
Introduction to Game Theory Presentation vs. exam You and your partner Either study for the exam or prepare the presentation (not both) Exam (50%) If you study for the exam, your (expected) grade is 92
More informationEcon 323 Microeconomic Theory. Chapter 10, Question 1
Econ 323 Microeconomic Theory Practice Exam 2 with Solutions Chapter 10, Question 1 Which of the following is not a condition for perfect competition? Firms a. take prices as given b. sell a standardized
More informationDuopoly models Multistage games with observed actions Subgame perfect equilibrium Extensive form of a game Two-stage prisoner s dilemma
Recap Last class (September 20, 2016) Duopoly models Multistage games with observed actions Subgame perfect equilibrium Extensive form of a game Two-stage prisoner s dilemma Today (October 13, 2016) Finitely
More informationANASH EQUILIBRIUM of a strategic game is an action profile in which every. Strategy Equilibrium
Draft chapter from An introduction to game theory by Martin J. Osborne. Version: 2002/7/23. Martin.Osborne@utoronto.ca http://www.economics.utoronto.ca/osborne Copyright 1995 2002 by Martin J. Osborne.
More informationAS/ECON 2350 S2 N Answers to Mid term Exam July time : 1 hour. Do all 4 questions. All count equally.
AS/ECON 2350 S2 N Answers to Mid term Exam July 2017 time : 1 hour Do all 4 questions. All count equally. Q1. Monopoly is inefficient because the monopoly s owner makes high profits, and the monopoly s
More informationSimon Fraser University Spring 2014
Simon Fraser University Spring 2014 Econ 302 D200 Final Exam Solution This brief solution guide does not have the explanations necessary for full marks. NE = Nash equilibrium, SPE = subgame perfect equilibrium,
More informationAlgorithms and Networking for Computer Games
Algorithms and Networking for Computer Games Chapter 4: Game Trees http://www.wiley.com/go/smed Game types perfect information games no hidden information two-player, perfect information games Noughts
More informationIntroduction to Industrial Organization Professor: Caixia Shen Fall 2014 Lecture Note 5 Games and Strategy (Ch. 4)
Introduction to Industrial Organization Professor: Caixia Shen Fall 2014 Lecture Note 5 Games and Strategy (Ch. 4) Outline: Modeling by means of games Normal form games Dominant strategies; dominated strategies,
More informationECON 803: MICROECONOMIC THEORY II Arthur J. Robson Fall 2016 Assignment 9 (due in class on November 22)
ECON 803: MICROECONOMIC THEORY II Arthur J. Robson all 2016 Assignment 9 (due in class on November 22) 1. Critique of subgame perfection. 1 Consider the following three-player sequential game. In the first
More informationMicroeconomic Theory II Preliminary Examination Solutions Exam date: June 5, 2017
Microeconomic Theory II Preliminary Examination Solutions Exam date: June 5, 07. (40 points) Consider a Cournot duopoly. The market price is given by q q, where q and q are the quantities of output produced
More informationEconomics 109 Practice Problems 1, Vincent Crawford, Spring 2002
Economics 109 Practice Problems 1, Vincent Crawford, Spring 2002 P1. Consider the following game. There are two piles of matches and two players. The game starts with Player 1 and thereafter the players
More informationIn reality; some cases of prisoner s dilemma end in cooperation. Game Theory Dr. F. Fatemi Page 219
Repeated Games Basic lesson of prisoner s dilemma: In one-shot interaction, individual s have incentive to behave opportunistically Leads to socially inefficient outcomes In reality; some cases of prisoner
More informationStochastic Games and Bayesian Games
Stochastic Games and Bayesian Games CPSC 532L Lecture 10 Stochastic Games and Bayesian Games CPSC 532L Lecture 10, Slide 1 Lecture Overview 1 Recap 2 Stochastic Games 3 Bayesian Games Stochastic Games
More information6.207/14.15: Networks Lecture 10: Introduction to Game Theory 2
6.207/14.15: Networks Lecture 10: Introduction to Game Theory 2 Daron Acemoglu and Asu Ozdaglar MIT October 14, 2009 1 Introduction Outline Review Examples of Pure Strategy Nash Equilibria Mixed Strategies
More informationEconomics 171: Final Exam
Question 1: Basic Concepts (20 points) Economics 171: Final Exam 1. Is it true that every strategy is either strictly dominated or is a dominant strategy? Explain. (5) No, some strategies are neither dominated
More informationMicroeconomics of Banking: Lecture 5
Microeconomics of Banking: Lecture 5 Prof. Ronaldo CARPIO Oct. 23, 2015 Administrative Stuff Homework 2 is due next week. Due to the change in material covered, I have decided to change the grading system
More information