Computing equilibria in discounted 2×2 supergames


Computational Economics manuscript No. (will be inserted by the editor)

Computing equilibria in discounted 2×2 supergames

Kimmo Berg · Mitri Kitti

Received: date / Accepted: date

Abstract This paper examines the subgame perfect pure strategy equilibrium paths and payoff sets of discounted supergames with perfect monitoring. The main contribution is to provide methods for computing, and tools for analyzing, the equilibrium paths and payoffs in repeated games. We introduce the concept of a first-action feasible path, which simplifies the computation of equilibria. These paths can be composed into a directed multigraph, which is a useful representation of the equilibrium paths. We examine how the payoffs, the discount factors and the properties of the multigraph affect the possible payoffs, their Hausdorff dimension, and the complexity of the equilibrium paths. The computational methods are applied to the twelve symmetric strictly ordinal 2×2 games. We find that these games can be classified into three groups based on the complexity of their equilibrium paths.

Keywords repeated game · 2×2 game · subgame perfect equilibrium · equilibrium path · payoff set · multigraph

K. Berg: Aalto University School of Science, Systems Analysis Laboratory, P.O. Box 11100, FI Aalto, Finland, kimmo.berg@tkk.fi
M. Kitti: Aalto University School of Economics, Department of Economics, P.O. Box 21240, FI Aalto, Finland

1 Introduction

Supergames provide an elementary framework for examining competition and cooperation in long-term relationships. It is well known that the equilibria of the stage game differ fundamentally from those of both finitely and infinitely repeated games (Benoit and Krishna 1985; Mailath and Samuelson 2006). Our first objective is to present methods for computing and analyzing equilibrium paths and payoffs in discounted supergames.

The second objective is to apply these methods to the symmetric 2×2 games, which constitute an important class of games as they capture a wide range of applications in the economic, social and biological sciences (Maynard Smith 1982; Axelrod 1984; Hauert 2001; Brams 2003).

The main ideas of this paper stem from recent work by Berg and Kitti (2011), who analyze paths of action profiles that are induced by subgame perfect pure strategies in infinitely repeated discounted games with perfect monitoring. They show that it is possible to characterize the equilibrium paths with fragments called elementary subpaths. Their characterization builds upon the set-valued recursive methods developed by Abreu et al. (1986, 1990) and Cronshaw and Luenberger (1994). This paper gives an alternative approach using only paths. The approach is based on Abreu (1988), who characterizes the equilibria using simple strategies and the one-shot deviation principle.

The first step in constructing the elementary subpaths is the computation of the first-action feasible (FAF) paths. These are finite paths that provide high enough payoffs that none of the players is willing to deviate from the first action profile on the path. To illustrate the idea of a FAF path, let us assume that there are two action profiles a and b in the stage game that is being repeated. If bbaa is a FAF path, then the sequence baa combined with any equilibrium payoff guarantees that there are no profitable deviations from the first action b. This does not, however, mean that bbaa can be part of an equilibrium path, since there might be profitable deviations from the other parts of the path, such as the second b or the a's.

The second step is to construct the equilibrium paths from the FAF paths, i.e., to find the elementary paths of the game or a representation for them. The idea is to combine the FAF paths so that the subgame perfect equilibrium (SPE) paths are obtained. For example, we cannot combine bbaa with bab, since the first requires that the path after ba continues with a and the second requires that it continues with b. We can, however, combine the FAF paths bbaa and ba, and then there are no profitable deviations from either b in bbaa. It turns out that there is a simple algorithm for combining the FAF paths, and we can represent the equilibrium paths with a directed multigraph.

The multigraph representation allows us to analyze and compute the payoff set and the equilibrium paths of the game. We find that the payoff set is a graph-directed self-affine set, i.e., a particular fractal. It is possible to estimate the Hausdorff dimension of the payoff set using tools developed for this kind of fractals (Mauldin and Williams 1988; Edgar and Mauldin 1992; Edgar and Golds 1999; Falconer 1988, 1992). Intuitively, the Hausdorff dimension measures the complexity of the set, i.e., how the set fills the payoff space. It is important to distinguish between paths and payoffs: the complexity of the game is in the paths, and the payoff set is a mapping from these paths. We find that the Hausdorff dimension is related to the cycles and the contractions of the arcs in the multigraph. The dimension is zero if there is no node with more than one cycle. It may happen that the paths increase but the cycles remain the same, and thus the paths affect the dimension only through the cycles.

On the contrary, the discount factor directly affects all the contractions, which means that the dimension depends only on the discount factor if the cycles do not change.

Another advantage of the multigraph representation is that it can be used in finding an approximation for the set of equilibrium payoffs. The computation of the payoff set has been previously studied in Cronshaw and Luenberger (1994), Cronshaw (1997) and Judd et al. (2003). These papers, however, assume that the players use correlated strategies, which convexifies the payoff set. In this paper, we assume neither correlated nor mixed strategies. The difference to the earlier results comes from the methodology: we focus on specific paths that generate the payoff set rather than trying to find the payoff set as a whole.

As an application of the computational methods, we examine all the twelve symmetric strictly ordinal 2×2 games (Robinson and Goforth 2005) for different discount factors and payoff values. We find that the payoff sets and the complexity of the equilibrium paths are quite different in the twelve games. They are roughly classified into three groups based on the complexity of the paths: i) prisoner's dilemma, stag hunt, chicken and no conflict games have more complex equilibrium paths than the others, and all four actions can be played in suitable combinations; ii) the paths in leader, battle of the sexes, coordination and anti-coordination games consist mainly of repetitions of the stage game's two Nash equilibria; iii) the equilibrium in the rest of the anti-games is the repetition of the stage game's only Nash equilibrium.

The paper is structured as follows. Section 2 characterizes the subgame perfect equilibria. In Section 3, we define the first-action feasible paths and develop algorithms to compute them. We also construct and analyze the multigraph and the payoff set. The symmetric 2×2 supergames are examined in Section 4. The results are discussed in Section 5.

2 Subgame perfect equilibria

2.1 Discounted supergames

We assume that there are $n$ players. The set of players is $N = \{1,\dots,n\}$. The set of actions available to player $i$ in the stage game is $A_i$. The sets $A_i$, $i \in N$, are assumed to be finite. The set of action profiles in the stage game is denoted by $A = \prod_i A_i$. As usual, $a_{-i}$ denotes the actions of the players other than player $i$, and the corresponding set of action profiles is $A_{-i} = \prod_{j \neq i} A_j$. The function $u : A \to \mathbb{R}^n$ gives the vector of payoffs that the players receive in the stage game when a given action profile is played, i.e., if $a \in A$ is played then player $i$ receives the payoff $u_i(a)$.

In the supergame the stage game is infinitely repeated, and the players discount the future payoffs with discount factors $\delta_i$, $i \in N$. We assume perfect monitoring: all players observe the action profile played at the end of each period.
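To make the setup concrete, the following sketch (ours, not from the paper) stores a symmetric 2×2 stage game as NumPy payoff arrays and computes a player's best one-shot deviation payoff against a given action profile; this quantity appears on the right-hand side of the incentive constraints below. The numbers are the prisoner's dilemma values R=5, S=1, T=7, P=2 used in Section 4, and the snippets in later sections continue this running example.

```python
import numpy as np

# Two-player 2x2 stage game indexed by (action_1, action_2), with 0 = C and 1 = D.
# u[i][a1, a2] is player i's payoff; the values are the prisoner's dilemma
# parameters R=5, S=1, T=7, P=2 from Section 4.
u = [
    np.array([[5.0, 1.0],    # player 1: u_1(C,C)=R, u_1(C,D)=S
              [7.0, 2.0]]),  #           u_1(D,C)=T, u_1(D,D)=P
    np.array([[5.0, 7.0],    # player 2 (the game is symmetric)
              [1.0, 2.0]]),
]

def best_deviation(i, profile):
    """Player i's highest stage payoff from a unilateral deviation at `profile`
    (the maximum over i's own actions, holding the other player's action fixed)."""
    payoffs = []
    for ai in range(2):
        dev = list(profile)
        dev[i] = ai
        payoffs.append(u[i][tuple(dev)])
    return max(payoffs)

C, D = 0, 1
a, b, c, d = (C, C), (C, D), (D, C), (D, D)   # the four action profiles
print(best_deviation(0, a))   # deviating from mutual cooperation pays T = 7
```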

A history contains the path of action profiles that have previously been played in the game. The set of length-$k$ histories, or paths, is denoted by $A^k$, the $k$-fold Cartesian product of $A$. The empty path is $\emptyset$, and the initial history is the null set, i.e., $A^0 = \{\emptyset\}$. Infinitely long paths are denoted by $A^\infty$. When referring to the set of paths beginning with a given action profile $a$, we use $A^k(a)$ and $A^\infty(a)$ for length-$k$ paths and infinitely long paths, respectively. Moreover, $A$ is the set of all paths, finite or infinite, and $A(a)$ is the set of all paths that start with $a$, i.e., the union of $A^k(a)$, $k = 1,2,\dots$, and $A^\infty(a)$.

A strategy for player $i$ in the supergame is a sequence of mappings $\sigma_i^0, \sigma_i^1, \dots$, where $\sigma_i^k : A^k \to A_i$. The set of strategies for player $i$ is $\Sigma_i$. The strategy profile consisting of $\sigma_1,\dots,\sigma_n$ is denoted by $\sigma$. Given a strategy profile $\sigma$ and a path $p$, the restriction of the strategy profile after $p$ is $\sigma|_p$. The outcome path, or simply path, that $\sigma$ induces is $(a^0(\sigma), a^1(\sigma), \dots) \in A^\infty$, where $a^k(\sigma) = \sigma^k(a^0(\sigma) \cdots a^{k-1}(\sigma))$ for all $k$. The average discounted payoff of player $i$ corresponding to the strategy profile $\sigma$ is
$$U_i(\sigma) = (1-\delta_i) \sum_{k=0}^{\infty} \delta_i^k u_i(a^k(\sigma)).$$
Subgame perfection is defined in the usual way; $\sigma$ is a subgame perfect equilibrium (SPE) of the supergame if
$$U_i(\sigma|_p) \geq U_i(\sigma_i', \sigma_{-i}|_p) \quad \text{for all } i \in N,\ k \geq 0,\ p \in A^k, \text{ and } \sigma_i' \in \Sigma_i.$$
This paper focuses on subgame perfect equilibrium paths (SPEP), which are the paths $p \in A^\infty$ that are induced by the pure strategy SPE profiles $\sigma$.
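The average discounted payoff of any path that eventually cycles has a simple closed form, which the algorithms of Section 3 use when they complete a finite path with an infinitely repeated cycle. The sketch below (ours, continuing the running example and its payoff arrays u) evaluates $U_i$ for a path given as a finite prefix followed by a repeated cycle.

```python
def path_payoff(i, prefix, cycle, delta):
    """Average discounted payoff (1-delta) * sum_k delta^k u_i(a^k) of the
    infinite path prefix + cycle + cycle + ..., using the geometric closed
    form for the repeated cycle."""
    head = sum((1 - delta) * delta**k * u[i][x] for k, x in enumerate(prefix))
    one_pass = sum((1 - delta) * delta**k * u[i][x] for k, x in enumerate(cycle))
    tail = delta**len(prefix) * one_pass / (1 - delta**len(cycle))
    return head + tail

# Payoff to player 1 of playing d once and then alternating bc forever.
print(path_payoff(0, prefix=[d], cycle=[b, c], delta=0.4))
```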

2.2 Equilibrium conditions for SPE paths

Subgame perfect equilibrium strategies can be extremely complex. However, the analysis of equilibrium paths is simplified by the fact that any equilibrium behavior is supported by the threat of extremal punishments, i.e., those punishments which lead to the players' smallest SPE payoffs (Abreu 1988). This means that to check whether a given path of action profiles is SPE, it only needs to be shown that at any stage on the path there are no profitable one-shot deviations when the deviations lead to extremal punishments. The composition of paths leading to the extremal punishments is called the extremal penal code. The idea of the extremal penal code has recently been utilized for more general dynamic games (Kitti 2010, 2011). In the following, the least equilibrium payoffs, i.e., the extremal punishments, are denoted by $\underline{v}_i = \min\{v_i : v \in V\}$, where $V$ is the set of SPE payoffs.

As mentioned, the SPE paths are characterized by the fact that there are no profitable one-shot deviations at any stage. Thus, a path $p = a^0(\sigma)a^1(\sigma)\cdots$ induced by $\sigma$ is an SPE path if and only if
$$(1-\delta_i)\,u_i(a^k(\sigma)) + \delta_i v_i^k \;\geq\; \max_{a_i \in A_i}\left[(1-\delta_i)\,u_i(a_i, a_{-i}^k(\sigma)) + \delta_i \underline{v}_i\right], \qquad (1)$$
for all $i \in N$ and $k \geq 0$, where $v_i^k = (1-\delta_i)\sum_{j=0}^{\infty}\delta_i^j u_i(a^{k+1+j}(\sigma))$ is the continuation payoff after $a^k(\sigma)$. Condition (1) means that the action taken at any stage is incentive compatible (IC) for all players, i.e., at any stage all players prefer taking the action prescribed by the SPE path to deviating and then receiving the extremal punishment payoff.

We note that it is possible that there are no subgame perfect equilibria in pure strategies, i.e., $V = \emptyset$. This happens, for example, in the game of matching pennies. A sufficient condition for $V \neq \emptyset$ is that the stage game has a pure strategy Nash equilibrium. In the numerical examples of this paper, this is not a problem: the examples always have subgame perfect equilibria, and we know the minimal payoffs corresponding to the extremal penal codes.

3 Algorithms to compute equilibrium paths and payoff sets

3.1 First-action feasible paths

We define a first-action feasible (FAF) path as a finite path whose first action profile is incentive compatible as long as the final element of the path is followed by another SPE path. FAF paths play a central role in this paper, and they will be used in constructing the SPE paths.

For a path $p$, let us define $p_j$ as the path that starts from element $j+1$, and $p^k$ as the path of the first $k$ elements of $p$. For example, when $p = a^0 a^1 \cdots$ we have $p_1 = a^1 a^2 \cdots$, $p^k = a^0 \cdots a^{k-1}$ and $p_j^k = a^j \cdots a^{j+k-1}$. The length of a path $p$ is denoted by $|p|$, $i(p)$ is the initial and $f(p)$ the final element of $p$. If $p$ is infinitely long, then $f(p) = \emptyset$. If $p$ and $p'$ are two paths, then $pp'$ is the path obtained by juxtaposing the terms of $p$ and $p'$.

Let us denote the action profiles by $A = \{a, b, c, d\}$. Moreover, $a^\infty$ is the infinite repetition of the action profile $a \in A$, and $a^N$ means that $a$ is repeated arbitrarily many times, where $N = \{0, 1, \dots\}$. For example, $(bc)^2 = bcbc$ means that the path $bc$ is repeated twice.

For each action profile $a \in A$, it is possible to define the least payoff in $V$ which makes $a$ incentive compatible. This payoff vector is denoted by $\mathrm{con}(a)$, where $\mathrm{con}_i(a)$ is the solution of
$$(1-\delta_i)\,u_i(a) + \delta_i\,\mathrm{con}_i(a) = \max_{a_i' \in A_i}\left[(1-\delta_i)\,u_i(a_i', a_{-i}) + \delta_i \underline{v}_i\right].$$
Moreover, we define the least continuation for $p = p^{k-1}a \in A^k$, $k \geq 2$, $a \in A$, by
$$\mathrm{con}_i(p) = \delta_i^{-1}\left[\mathrm{con}_i(p^{k-1}) - (1-\delta_i)\,u_i(a)\right].$$
This gives us $\mathrm{con}(p)$, which is the continuation payoff that is required after $f(p)$ to make the first action profile of $p$ incentive compatible.
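Continuing the running example, the next sketch (ours) computes con(a) for a single action profile and applies the recursion above to obtain con(p) for a finite path. The extremal punishment payoffs are an input; for this prisoner's dilemma we take $\underline{v} = (2,2)$, the payoff of repeating the stage Nash equilibrium d, which is an assumption of the illustration rather than a value computed by the paper's methods.

```python
import numpy as np

delta = 0.4                    # common discount factor, chosen for the illustration
v_min = np.array([2.0, 2.0])   # assumed extremal punishments: repeat d forever

def con_profile(x):
    """Least continuation payoff vector making the profile x incentive compatible:
    solves (1-delta) u_i(x) + delta con_i(x) = best deviation value + delta v_min_i."""
    return np.array([
        ((1 - delta) * best_deviation(i, x) + delta * v_min[i]
         - (1 - delta) * u[i][x]) / delta
        for i in range(2)
    ])

def con_path(path):
    """con(p): least continuation required after the last element of `path` so that
    the first action profile of `path` is incentive compatible (the recursion above)."""
    req = con_profile(path[0])
    for x in path[1:]:
        req = (req - (1 - delta) * np.array([u[0][x], u[1][x]])) / delta
    return req

print(con_profile(a))     # (5, 5): supporting mutual cooperation needs continuation 5
print(con_path([b, c]))   # continuation needed after bc to keep its first action b IC
```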

If a path p requires less than what its last action requires, i.e., $\mathrm{con}(p) \leq \mathrm{con}(f(p))$, then p is a FAF path. The condition means that all SPE paths that start from $f(p)$ are possible continuations of the path p. These finite paths make the structure of equilibrium paths recursive. When the length of the path p is one, it is a FAF path if $\mathrm{con}(p) \leq \underline{v}$. This means that any SPE path may follow the path p.

Definition 1 A finite path $p \in A^k$ is a FAF path if
$$\mathrm{con}(p) \leq \mathrm{con}(f(p)) \ \text{ when } k \geq 2, \qquad \mathrm{con}(p) \leq \underline{v} \ \text{ when } k = 1. \qquad (2)$$

We note that FAF paths can be interpreted as finite filters that find equilibrium patterns in infinite paths. FAF paths can be used to verify that a path is SPE without examining the whole infinite continuation path at each stage. The following example demonstrates how we can utilize FAF paths.

Example 1 Let us examine whether the path $p = (abba)^\infty$ is an SPE path when a, ba and bbaa are FAF paths in the game. We need to show that there are no profitable one-shot deviations at any stage, i.e., that the payoff on the path is greater than the maximum deviation payoff plus the punishment payoff after the deviation. The whole infinite path needs to be checked, but as the path is recursive, only four variations need to be checked: the starts from both a's and from both b's. Both a's are IC since a is a FAF path, the first b is IC since bbaa is a FAF path, and the second b is IC since ba is a FAF path. Thus, the path p is an SPE path.

In addition to testing whether a path is FAF, we can rather easily check whether it cannot be FAF.

Definition 2 A finite path $p \in A^k(a)$ is a first-action infeasible (FAI) path if
$$\mathrm{con}_i(p) > \bar{v}_i \ \text{ for some } i \in N, \qquad (3)$$
where $\bar{v}_i = \max\{v_i : v \in V\}$, $i \in N$.

The FAI paths are not incentive compatible no matter what SPE path follows them. If the largest SPE payoff $\bar{v}_i$ of player i is not known, then we can use the maximum payoff in the stage game. We can now classify any finite path as FAF, FAI, or neither FAF nor FAI. In the latter case, we say that the path is neutral (N). Thus, all finite paths are either FAF, FAI or N paths. Moreover, we examine only the shortest FAF and FAI paths, since pp' is a FAF (FAI) path if p is a FAF (FAI) path and p' is any SPE path (any path, respectively). Algorithm 1 classifies the finite paths using the breadth-first search (BFS).

Algorithm 1: BFS to find FAF paths
  input: u, $\underline{v}$, $\bar{v}$, δ and the maximum path length.
  output: FAF, FAI and N paths up to the given path length.
  begin
    Add the paths a, b, c and d to the queue q. Set the queue counter i = 1.
    while |q(i)| ≤ maximum path length do
      if q(i) satisfies Eq. (2) then q(i) is a FAF path.
      else if q(i) satisfies Eq. (3) then q(i) is a FAI path.
      else add the paths q(i)a, q(i)b, q(i)c and q(i)d to the queue q.
      if at the end of the queue then stop; otherwise, set i = i + 1.
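A minimal sketch of Algorithm 1 (ours, continuing the running prisoner's dilemma example and reusing u, con_path, v_min, delta and the profile tuples a, b, c, d from the earlier snippets) classifies each finite path with conditions (2)-(3) and extends only the neutral ones in breadth-first order. The FAI test uses the maximum stage-game payoff as the upper bound, as suggested in the text when the largest SPE payoffs are unknown.

```python
from collections import deque
import numpy as np

v_max = np.array([u[0].max(), u[1].max()])   # crude upper bound on SPE payoffs

def classify(path):
    """Return 'FAF', 'FAI' or 'N' for a finite list of action profiles (Def. 1 and 2)."""
    req = con_path(path)
    target = v_min if len(path) == 1 else con_path([path[-1]])   # con(f(p)) for k >= 2
    if np.all(req <= target + 1e-9):
        return 'FAF'
    if np.any(req > v_max + 1e-9):
        return 'FAI'
    return 'N'

def find_faf_paths(max_len):
    """Breadth-first enumeration of the shortest FAF and FAI paths (Algorithm 1)."""
    faf, fai = [], []
    queue = deque([(x,) for x in (a, b, c, d)])
    while queue:
        path = queue.popleft()
        if len(path) > max_len:
            continue
        label = classify(list(path))
        if label == 'FAF':
            faf.append(path)
        elif label == 'FAI':
            fai.append(path)
        else:                                  # neutral: extend by one profile
            queue.extend(path + (x,) for x in (a, b, c, d))
    return faf, fai

faf_paths, fai_paths = find_faf_paths(max_len=4)
print(len(faf_paths), 'FAF paths of length <= 4 at delta =', delta)
```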

3.2 Constructing the multigraph

It is possible that the FAF paths contain parts that are infeasible. For example, bdd may be a FAF path while dd is a FAI path, which means that bdd cannot be part of an equilibrium path. When we remove these infeasible FAF paths, we get the elementary paths of the game (Berg and Kitti 2011), which are the fragments that generate the SPE paths. This can be done at the same time as we form a directed multigraph representation of the SPE paths.

The multigraph consists of states and transitions between the states, and it is constructed from the FAF paths. The FAF paths are represented as a tree, and the nodes in the tree are the states of the multigraph. The tree also gives the directed arcs between the states, except for the leaf nodes. For a leaf node p, we find the smallest k ≥ 1 such that the suffix $p_k$ is found in the tree. If no such suffix is found and the longest part of $p_k$ found in the tree is an inner node, then there is no continuation to p. The path p is then an infeasible FAF path, and the state and the arcs pointing to it are removed. This guarantees that there are no profitable deviations in any part of the constructed path. The graph is simplified by removing the states that have only one destination and are not children of the root node. The arcs pointing to these removed states are redirected to the new destinations, which makes the graph a multigraph, i.e., there may be multiple arcs between the states. Algorithm 2 constructs the multigraph.

Algorithm 2: Constructing the multigraph
  input: Tree of FAF paths.
  output: Multigraph of SPE paths.
  define: lcp(p) is the node of the longest common path with p in the tree.
  begin
    1. Form the nodes: they are the nodes of the tree.
    2. Form the arcs. For each node p of the tree:
       if p is an inner node then form arcs to its children.
       else if p is a leaf node connected to the root node then form arcs to the children of the root node.
       else for k = 1 to |p| − 1 do
         if $p_k$ is found in the tree then remove node p and form an arc from $p^{|p|-1}$ to node $p_k$.
         else if lcp($p_k$) is an inner node then part of p is infeasible; remove node p.
         else set k = k + 1.
    3. Label the arcs with the action profile that is played when that arc is traversed. Simplify the graph by removing the nodes that are not connected to the root node and have only one outgoing arc. Reroute the removed nodes' incoming arcs to the node's only destination. Relabel the arcs with the paths of action profiles that are played when the arc is traversed.

[Fig. 1: (a) Tree of FAF paths with nodes a, b, ba, bb, bba and bbaa; (b) multigraph of SPE paths. Caption: Tree of FAF paths and the multigraph representation.]

Example 2 Assume that a, ba and bbaa are the only FAF paths in the game. The tree of FAF paths is presented in Fig. 1, where the root node is the empty history. The tree is the starting point in forming the multigraph. The purpose is to find the destinations of the FAF paths: a links to a and b, and ba links to a. For p = bbaa, we first try to find $p_1$ = baa in the tree. Since it is not found and ba is a FAF path, there are no profitable deviations from the first two b's in bbaa. We then try to find $p_2$ = aa, which is also not found. Hence, bba can be played only if a follows, i.e., if bbaa is played. Finally, we find $p_3$ = a in the tree, which is the destination of bba. Thus, we remove node bbaa and link bba to a. After the simplification, we get the multigraph in Fig. 1. The arc labels, e.g., baa, denote the actions that are played when the arc is traversed. If a label is empty, the destination node's action is played when the arc is traversed. We note that there are many SPE paths in addition to $(abba)^\infty$; e.g., $a^\infty$ and $(ba)^\infty$ are SPEPs.

Directed multigraphs have previously been utilized in the framework of supergames for analyzing the complexity of strategies (Rubinstein 1986; Abreu and Rubinstein 1988). For that purpose strategies are represented by finite automata, which can be visualized with graphs. While an automaton represents a single strategy, the multigraphs of this paper represent sets of SPEPs. We emphasize that the multigraph representation of SPEPs should not be confused with an automaton.
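The core of step 2 of Algorithm 2 is the suffix search for the leaf nodes, as in Example 2. The sketch below is our illustration of that step under a simple encoding: the tree is a set of path strings over the letters a-d, and the helper names are ours. It returns the node that the prefix $p^{|p|-1}$ should link to, or reports that part of p is infeasible.

```python
def is_inner(node, tree_nodes):
    """A node is an inner node if some other node of the (prefix-closed) tree extends it."""
    return any(n != node and n.startswith(node) for n in tree_nodes)

def leaf_destination(p, tree_nodes):
    """Destination of leaf path p, or None if part of p is infeasible (Algorithm 2, step 2)."""
    for k in range(1, len(p)):
        suffix = p[k:]                        # p_k: the path without its first k elements
        if suffix in tree_nodes:
            return suffix                     # arc from p[:-1] to this node
        # lcp(p_k): the tree node sharing the longest common path with the suffix
        lcp = max((n for n in tree_nodes if suffix.startswith(n)), key=len, default='')
        if is_inner(lcp, tree_nodes):         # an inner node: no continuation exists
            return None
    return None

# Example 2: the FAF paths a, ba and bbaa give the tree nodes a, b, ba, bb, bba, bbaa.
tree = {'a', 'b', 'ba', 'bb', 'bba', 'bbaa'}
print(leaf_destination('bbaa', tree))         # 'a': node bbaa is removed, bba links to a
```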

3.3 Payoff set from the multigraph

The payoff set usually consists of infinitely many points, and we now describe how to form an approximation of this set. There is a simple algorithm for generating the SPE paths and payoffs by using the multigraph and its cycles. Namely, we can form infinite paths by combining finite paths from the multigraph with infinite cycles starting from the last state of the finite path. Algorithm 3 finds the elementary, or minimal, cycles of the multigraph by the standard Tarjan algorithm (Tarjan 1972), which uses depth-first search (DFS). The algorithm limits the number of visited nodes, as it takes a long time to search multigraphs with hundreds of nodes.

Algorithm 3: DFS to find the elementary cycles
  input: Multigraph and maximum visit length.
  output: Elementary cycles for each node.
  Push the children of the root node with their visited lists to the stack.
  while stack not empty do
    Pop the stack. Traverse the arcs of the node:
      if the destination is on the visited list then add the cycle to all nodes in the cycle.
      else if the length of the visited list < maximum visit length then push the destination node with the new visited list to the stack.

The approximation of the payoff set depends on the number of points, the maximum length of the finite path, the infinite cycles, and the order in which the nodes of the multigraph are visited. There are two basic search orders: the breadth-first search (BFS) and the depth-first search (DFS). The BFS gives, in general, a good approximation of the payoff set, whereas the DFS approximates a specific part of the payoff set. Algorithm 4 gives the DFS version; the BFS version is exactly the same except that a queue is used instead of the stack. We note that it may also be a good idea to limit the number of cycles when a single state has hundreds of cycles.

Algorithm 4: DFS to plot an approximate payoff set
  input: Multigraph, cycles, maximum path length and maximum number of points.
  output: Approximation of the payoff set.
  Set the point counter i = 0. Push the root node to the stack.
  while stack not empty do
    Pop the stack. Traverse the arcs of the node:
      Add the played action(s) to the path p.
      if |p| ≤ maximum path length then
        for s = cycles of f(p) do
          if i ≤ maximum points then plot the payoff of p followed by the infinite cycle s; set i = i + 1.
        Push the destination node and the path p to the stack.
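The following sketch illustrates the idea behind Algorithms 3 and 4 (ours; the graph encoding, the listed cycles and the function names are not from the paper). It takes the multigraph of Example 2 as labeled arcs, walks it breadth-first, and evaluates the payoff of each finite prefix completed by an infinitely repeated cycle with the closed form of the earlier path_payoff sketch. The prisoner's dilemma payoff arrays are reused purely for illustration, since Example 2 does not specify stage payoffs.

```python
from collections import deque

# Multigraph of Example 2: states 'a' and 'b'. Each arc carries the list of
# action profiles played when it is traversed; from state b one can play
# either a or the longer fragment baa before returning to state a.
graph = {
    'root': [('a', [a]), ('b', [b])],
    'a':    [('a', [a]), ('b', [b])],
    'b':    [('a', [a]), ('a', [b, a, a])],
}
# Elementary cycles attached to each state (roughly what Algorithm 3 returns).
cycles = {
    'a': [[a], [b, a], [b, b, a, a]],
    'b': [[a, b], [b, a, a, b]],
}

def approximate_payoffs(delta, max_len=6, max_points=2000):
    """BFS over the multigraph: payoffs of finite prefixes completed by cycles."""
    points = []
    queue = deque([('root', [])])
    while queue and len(points) < max_points:
        state, prefix = queue.popleft()
        for cyc in cycles.get(state, []):
            points.append(tuple(path_payoff(i, prefix, cyc, delta) for i in range(2)))
        if len(prefix) < max_len:
            for dest, label in graph[state]:
                queue.append((dest, prefix + label))
    return points

pts = approximate_payoffs(delta=0.5)
print(len(pts), 'approximate payoff points generated from the multigraph')
```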

3.4 Dimension estimates

It is possible to analyze the properties of the payoff set by using the constructed multigraph. The payoff set is a fractal, and we can estimate its Hausdorff dimension with the graph-directed constructions of Mauldin and Williams (1988). The estimation of the exact payoff set dimension is, however, complicated because of the possibility of overlapping payoffs. For this reason, we distinguish two dimension estimates: the payoff set dimension, which can be computed only in special cases, and the path dimension, which involves no overlaps. The path dimension can always be computed, and it serves as an upper bound for the payoff set dimension. These notions will be demonstrated with numerical examples in Section 4.2.

Berg and Kitti (2011) observe that the set of SPE payoffs is a fixed point of a particular iterated function system (IFS) and consequently a fractal. A widely studied class of fixed points of IFSs are the self-affine sets. A self-affine set $S$ satisfies
$$S = \bigcup_{a \in A} B_a(S),$$
where $B_a$, $a \in A$, are affine contractions. The payoff set would be a self-affine set if there were no incentive compatibility constraints. In that case $B_a(v) = (I - T)u(a) + Tv$, where $T$ is a diagonal matrix with the discount factors on its diagonal. As observed by Berg and Kitti (2011), the payoff set is not necessarily self-affine but rather a subset of a self-affine set.

Let us now formulate the payoff set as a graph-directed construction. An IFS directed by a multigraph consists of nodes $M$, arcs $E_{qr}$, i.e., the arcs from $q$ to $r$, and functions $f_p$, $p \in E_{qr}$. The invariant set list is defined as
$$V_q = \bigcup_{r \in M} \bigcup_{p \in E_{qr}} f_p(V_r), \quad \text{for all } q \in M,$$
where $f_p$ corresponds to the affine mapping of arc $p$, e.g., $f_p = B_a \circ B_b \circ B_c$ when $p = abc$ is played on the arc. Furthermore, the payoff set is the union of the invariant set lists, i.e., $V = \bigcup_{q \in M} V_q$.

The Hausdorff dimension of the payoff set is estimated by associating a positive number $r_p$ with each arc $p \in E_{qr}$. If we assume that the players use the same discount factor $\delta$, we can define $r_p = \delta^{|p|}$, e.g., $r_p = \delta^3$ when $p = abc$ is played on the arc. Let $L(s)$ be the matrix with $L_{qr}(s) = \sum_{p \in E_{qr}} r_p^s$, and let $\Phi(s) = \rho(L(s))$ be the spectral radius of $L(s)$, i.e., $\rho(L) = \max_i |\lambda_i|$, where the $\lambda_i$ are the eigenvalues of $L$. Then the unique solution $s_1 \geq 0$ of $\Phi(s_1) = 1$ is the Hausdorff dimension of the Mauldin-Williams graph when the open set condition is satisfied (Mauldin and Williams 1988; Edgar and Mauldin 1992), i.e., when the subsets do not overlap.

The open set condition is satisfied in supergames when the discount factor is less than 1/2. When the discount factor is higher, it is in general difficult to find the exact dimension.

It is, however, possible to model the overlaps and give lower and upper bound estimates (Edgar and Golds 1999; Ngai and Wang 2001).

The dimension of the SPE paths can, however, always be computed, since overlaps are of no concern for tree structures (Rellick et al. 1991). The unique solution $s$ of
$$w_q^s = \sum_{r \in M} \sum_{p \in E_{qr}} r_p^s w_r^s, \qquad (4)$$
is the Hausdorff dimension of the multigraph. Here $w_q$, $q \in M$, are positive numbers. This equation has a simpler form if each arc is associated with only one action profile, i.e., if we skip part 3 of Algorithm 2. If we denote by $P$ the adjacency matrix of the graph, then the Hausdorff dimension $s$ satisfies $(I - \delta^s P)w = 0$, where $w$ is a vector of positive numbers whose size is the number of states in the graph. Now we can see that the Hausdorff dimension is related to the eigenvalues of the weighted adjacency matrix $P$, which are studied in spectral graph theory. If the multigraph is not strongly connected, then the dimensions of the subgraphs may differ; in that case the dimension of the graph is the maximum dimension of its subgraphs.

We can also simplify Eq. (4) when the multigraph has only one node. Then the dimension is given by (Rellick et al. 1991)
$$1 = \sum_{p \in C} r_p^s,$$
where $C$ is the set of cycles from the single node.

Besides the Hausdorff dimension, we can numerically analyze the payoff sets and the SPE paths. For example, we can examine how the payoff set covers the different parts of the payoff space, and how high the payoffs are for the players in the game. We can also measure how many different SPE paths there are by examining the versatility of the paths. One way to examine this is to compute the entropy of the different action profiles in SPE paths of a certain length. For example, a game where three actions can be played has a higher entropy than a game where only two actions can be played. Finally, we may analyze the number of states and cycles in the multigraph, and the number and length of FAF paths in the game.
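When every arc plays a single action profile, $(I - \delta^s P)w = 0$ with a positive $w$ forces $\delta^s \rho(P) = 1$, so the path dimension has the closed form $s = \ln\rho(P)/\ln(1/\delta)$. The sketch below (ours) computes this closed form and, for arcs labeled with longer fragments, solves $\rho(L(s)) = 1$ by bisection; the example encodes the two-state multigraph of Example 2, whose arcs from b have lengths 1 and 3.

```python
import numpy as np

def path_dimension_single(P, delta):
    """Closed form when every arc plays one profile: delta^s * rho(P) = 1."""
    rho = max(abs(np.linalg.eigvals(np.asarray(P, dtype=float))))
    return np.log(rho) / np.log(1.0 / delta)

def path_dimension(arc_lengths, delta, tol=1e-10):
    """General case: arc_lengths[q][r] lists the lengths |p| of the arcs from q to r;
    solve spectral_radius(L(s)) = 1 for s by bisection (L is decreasing in s)."""
    n = len(arc_lengths)
    def phi(s):
        L = [[sum((delta ** k) ** s for k in arc_lengths[q][r]) for r in range(n)]
             for q in range(n)]
        return max(abs(np.linalg.eigvals(np.array(L))))
    lo, hi = 0.0, 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi(mid) > 1.0 else (lo, mid)
    return 0.5 * (lo + hi)

# Two states where every arc plays one profile and all four arcs exist:
# rho(P) = 2, so the dimension is ln 2 / ln(1/delta), i.e., 1 at delta = 1/2.
print(path_dimension_single([[1, 1], [1, 1]], delta=0.5))
# Example 2's multigraph: arcs a->a, a->b of length 1; arcs b->a of lengths 1 and 3.
print(path_dimension([[[1], [1]], [[1, 3], []]], delta=0.5))   # roughly 0.81
```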

4 Equilibria in symmetric 2×2 supergames

4.1 Classification of 2×2 games

The 2×2 games can be classified into 144 strict ordinal games, of which 12 are symmetric (Robinson and Goforth 2005); see Rapoport and Guyer (1966), Kilgour and Fraser (1988) and Walliser (1988) for earlier taxonomies. The twelve symmetric ordinal games are presented in Figure 2. The two strategies are C (cooperate) and D (defect), which give the players the payoffs R, S, T and P; these also correspond to the action profiles a, b, c and d, respectively.

[Fig. 2: The twelve symmetric ordinal games (prisoner's dilemma, hawk-dove/chicken, leader, battle of the sexes, stag hunt, no conflict/harmony, coordination, and their anti-games), arranged by the ordering of the payoff parameters R, S, T and P.]

Each of the twelve regions represents a certain class of games: 1. prisoner's dilemma, 2. hawk-dove or chicken, 3. leader, 4. battle of the sexes, 5. stag hunt, 6. no conflict or harmony, 9. coordination, and their anti-games. The twelve symmetric games describe very different strategic situations, and they include the seven most studied 2×2 games (Robinson and Goforth 2005); these are games 1-5 and 9-10. The prisoner's dilemma is the most famous 2×2 game, and it demonstrates a situation where the players' independent rational choices (defect) lead to a Pareto-inefficient outcome. The game is special because it has a unique and inferior Nash equilibrium. Games 2-5 and 9-10 have two Nash equilibria. The game of chicken, which is also known as the hawk-dove or snowdrift game, is a model of conflict where both players prefer to defect. The outcome in which both players defect is, however, the worst possible, and it is not an equilibrium. The question then is who chickens out and cooperates. Games 3-4, which include the battle of the sexes, differ from the game of chicken in that the payoffs in the two equilibria are the best in the game. The games of coordination are 5 and 9-10, where one equilibrium dominates the other for both players. Stag hunt, for example, describes a conflict between safety and cooperation: one equilibrium is payoff dominant while the other is risk dominant.

4.2 Analysis of 2×2 supergames

In this section, we examine the equilibrium paths and payoffs for different discount factors in the 2×2 supergames.

The twelve symmetric games can be classified into three groups, four games each, based on the dimension of the SPE paths. The most interesting games are the prisoner's dilemma, chicken, stag hunt and no conflict games. In these games, all four action profiles can be played in suitable sequences, and the SPE payoffs cover a large portion of the feasible payoffs when the discount factor is moderate. The second group consists of games where only the two stage game Nash equilibria can be repeated in arbitrary order. The third group consists of anti-games, where there is only one subgame perfect equilibrium, i.e., the repetition of the Nash equilibrium.

Let us normalize the payoffs on the diagonals, i.e., R=5 and P=2. The payoffs T and S are chosen in the following way: the pairs (S,T) and (T,S) take the values (1,7), (3,7), (6,7), (1,4), (3,4) and (0,1). For example, the pair (S,T)=(1,7) is a prisoner's dilemma and (S,T)=(3,7) is a game of chicken. The anti-games are defined by interchanging the values of S and T. The sensitivity with respect to the payoff values is examined more thoroughly in the next section.

The third group consists of anti-games 7-8 and 11-12. These games have only one subgame perfect equilibrium, which is the repetition of the stage game's Nash equilibrium. The payoff set and path dimensions in these games are naturally zero. We note, however, that the payoff values affect the number of equilibria in games 8 and 12. It is possible that these games have more than one subgame perfect equilibrium, e.g., when the value of S is higher.

The second group, games 3-4 and 9-10, behaves in the same way for a wide range of discount factors. The stage games have two equilibria, and the SPE paths are arbitrary combinations of these two equilibria. In games 3-4, actions b and c are repeated, and all SPE paths are of the form $(b^N c^N)^\infty$. In games 9-10, actions a and d are repeated, and the paths are $(a^N d^N)^\infty$. When δ < 1/2, the payoff set consists of isolated points between the two payoff values, i.e., between b and c in games 3-4, and between a and d in games 9-10. The value δ = 1/2 is the limit at which the payoff set fills the line between the two points, and the Hausdorff dimension of the payoff set reaches the value 1. After this, the payoff set and its dimension remain the same even if δ is increased. This is also the limit at which the geometric payoff set dimension separates from the path dimension. This can be interpreted so that the dimension of the paths increases but the payoff set remains the same, i.e., multiple paths give the same payoff value.

The SPE paths and payoff sets change when it is possible to play actions outside the two equilibria. This happens when the discount factor is high enough. In game 3, the limit on δ is the highest, and the path $a(bc)^\infty$ is the first new SPE path when δ ≥ 13/16 ≈ 0.81. In game 4, it is possible to play a with a suitable combination of b and c when δ ≥ 0.67. In games 9-10, the limits are lower, as the payoff of equilibrium d, i.e., P=2, is much lower than that of the other equilibrium a, i.e., R=5. Moreover, it is possible to play ba and ca in game 9 (10) when δ ≥ 4/7 ≈ 0.57 (5/8 ≈ 0.63). The Hausdorff dimensions of the SPE paths are given in Table 1. The values are exact up to the above limits of 0.81, 0.67, 0.57 and 0.63 for games 3, 4, 9 and 10, respectively. For higher δ, the values are lower bound estimates, as the lengths of the FAF paths are restricted (to eight and twelve) for computational reasons.

[Table 1: The path dimensions for different discount factors, by game, with the Sierpinski game and the upper bound for 2×2 games as comparisons.]

The payoff set dimensions are the same as the path dimensions when δ ≤ 1/2. When the discount factor is between 1/2 and the above limits, the payoff set dimension is exactly one. When the discount factor is higher, it is difficult to estimate the exact payoff set dimensions due to overlaps.

Games 1-2 and 5-6 have the highest path dimensions. These games lead to the most interesting supergames in the sense that the equilibria are more than just repetitions of the stage game's Nash equilibria, and the payoff sets are more than isolated points or lines between two payoff values; see Fig. 3. The solid lines in the figure are the payoff requirements for the defect (D) column, i.e., the punishment payoffs, and the dashed lines are the payoff requirements for the cooperate (C) column. We can clearly see the fractal nature of the sets, i.e., the payoff sets consist of similar parts. We also note that the payoff sets are quite different: i) chicken fills the line between b and c, and the upper triangle towards a; ii) stag hunt and no conflict fill the other, cooperative diagonal from the lower left corner to the upper right corner, which means that there is a mutual gain in the players' payoffs; and iii) prisoner's dilemma fills the payoff space evenly, except for the Pareto efficient frontier, which has most of the holes.

When δ < 0.2, it is only possible to play the Nash equilibrium d in the prisoner's dilemma. When 0.2 ≤ δ < 0.4, it is also possible to play $(bc)^\infty$, i.e., all SPE paths are of the form $d^{N,\infty}(bc)^\infty$ and, symmetrically, $d^{N,\infty}(cb)^\infty$, where $d^{N,\infty}$ denotes that d can be played finitely or infinitely many times. When δ ≥ 0.4, it is possible to play a, ba, and bdca. Thus, when δ = 0.4 all paths are $d^{N,\infty}b^{0,1}(cb)^{N,\infty}c^{0,1}a^\infty$, $d^{N}b^{0,1}(cb)^{N}dca^\infty$ and $d^{N}c^{0,1}(bc)^{N}dba^\infty$, where $b^{0,1}$ denotes that b is either played once or not at all.

In the chicken game, it is only possible to play combinations of b and c when δ < 0.4. When 0.4 ≤ δ < 0.5, it is also possible to play d followed by a suitable combination of b's and c's. For example, dbcccc, dbcccbc and dbcccbbc are elementary paths when δ = 0.4. When δ ≥ 0.5, the paths a, da, dba, dbca and abcb can be played. Moreover, dbcc and dbcd are elementary paths.
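The prisoner's dilemma thresholds quoted above follow directly from condition (1). Playing a forever yields 5, while the best deviation followed by the punishment yields $(1-\delta)\cdot 7 + \delta\cdot 2$, so $a^\infty$ is incentive compatible exactly when $\delta \geq 0.4$; on $(bc)^\infty$ the player currently receiving S gets $(1+7\delta)/(1+\delta)$ against a deviation value of 2, so the alternating path is incentive compatible exactly when $\delta \geq 0.2$. The snippet below (ours, reusing the earlier helpers and the assumed punishment payoff of 2) confirms this numerically; it checks condition (1) only at the first stage, which is the binding one for these stationary and alternating paths.

```python
def ic_first_stage(prefix, cycle, delta):
    """Condition (1) at stage 0 of the path prefix + cycle^infinity:
    on-path value >= best one-shot deviation followed by the punishment payoff."""
    first = (prefix + cycle)[0]
    return all(
        path_payoff(i, prefix, cycle, delta)
        >= (1 - delta) * best_deviation(i, first) + delta * v_min[i] - 1e-9
        for i in range(2)
    )

for dlt in (0.19, 0.2, 0.39, 0.4):
    print(dlt,
          ic_first_stage([], [a], dlt),        # a^infinity: feasible from delta = 0.4
          ic_first_stage([], [b, c], dlt))     # (bc)^infinity: feasible from delta = 0.2
```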

[Fig. 3: The payoff sets in games 1-2 and 5-6: prisoner's dilemma (δ = 0.57), chicken, stag hunt (δ = 0.5) and no conflict (δ = 0.5).]

In the stag hunt, it is possible to play ba besides the two equilibria when δ ≥ 1/4. When the discount factor increases, b or c can be played when it is followed by a suitable combination of a's and d's. For example, baa, bab, bac and bad are elementary when δ = 0.4.

The no conflict game is interesting because it has a dominant strategy in which both players cooperate, and the Nash equilibrium gives the highest payoff. It is, however, possible to punish the other player when δ ≥ 1/2. The punishment paths are $b^\infty$ and $c^\infty$, which give the punishment payoff 3. When δ = 1/2, the elementary paths are a, bb, baa, bab, bad, daa, dad, bac, bca, dab and dba. The path baa means that it is possible to play b as long as two a's follow. The paths daa and dad mean that the worst outcome d can be played if aa or ad follows.

The path dimensions of games 1-2 and 5-6 are actually quite high. We have calculated two dimension estimates as a comparison in Table 1. The upper bound is the absolute dimension limit in 2×2 games, and it corresponds to a multigraph with four nodes that can be repeated in any order.

For example, the combinations of four affinely independent points fill the two-dimensional space when the contraction is 1/2, i.e., the upper bound is 2 when δ = 1/2. The Sierpinski game corresponds to a multigraph with three nodes that can be repeated in any order. An example of such a game, with three Nash equilibria, is given in Berg and Kitti (2011). We can see that the path dimensions of games 1-2 and 5-6 are higher than that of the Sierpinski game and close to the upper bound of 2×2 games for high discount factors.

4.3 Sensitivity analysis

The numerical examples of the previous section illustrate the differences between the payoff set and path dimensions, and their relation to the discount factor. The prisoner's dilemma with a low discount factor is a good example of a case where the elementary paths increase but both of the dimensions remain the same. First, it is possible to play d, and when the discount factor increases then $(bc)^\infty$ and $a^\infty$ become available, but both dimensions remain at zero. This is because there is no state in the multigraph with more than one cycle. When the first dual cycle appears, the dimension jumps up from zero.

Games 3-4 and 9-10, with discount factor δ < 1/2, are good examples of a case where the paths and the multigraph remain the same but the dimensions increase when the discount factor is increased. These games also illustrate the difference between the path and payoff set dimensions. When δ is more than one half but less than the calculated limits, the payoff set dimension is one but the path dimension increases. The increase in the path dimension can be interpreted as multiple paths giving the same payoff value.

The path dimension depends on two things: the cycles in the multigraph and the discounting. If the cycles do not change and there is a dual cycle, then the dimension increases continuously with the discount factor. But when a new cycle appears in the multigraph, the dimension jumps up discontinuously. For example, the dimension jumps from zero to 1.39 in the no conflict game when δ = 0.5. The payoff set changes dramatically from one point to the fractal shown in Fig. 3.

Mailath et al. (2002) observe that the maximum payoff in the prisoner's dilemma may be decreasing in the discount factor. Let us examine the game with (R,S,T,P) = (2, −1, 3, 0) (Mailath and Samuelson 2006). The payoff sets are illustrated in Fig. 4a) for three discount factors: δ = 0.35 is shown by the plus signs (+), δ = 0.4 by the crosses (x) and δ = 0.5 by the dots (·). We can see that the maximum payoff, which is around 2.5, keeps decreasing. The path that gives these payoffs is $ca^\infty$. As the discount factor increases, the payoff point moves on the line from c, (3,−1), towards a, (2,2). This is because the payoff mapping becomes less contractive and the part $a^\infty$ has more weight in the average discounted payoff.
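The decreasing maximum can be read off the closed form of the path payoff: on $ca^\infty$ player 1 receives $(1-\delta)\,u_1(c) + \delta\,u_1(a) = 3(1-\delta) + 2\delta = 3-\delta$, which falls from 2.65 to 2.6 to 2.5 at the three discount factors above, while player 2 receives $3\delta - 1$, so the point slides along the segment from $(3,-1)$ towards $(2,2)$. A quick numerical confirmation (ours, swapping the Mailath-Samuelson payoffs into the running example):

```python
# Stage payoffs (R,S,T,P) = (2,-1,3,0); this rebinds the running example's arrays.
u = [np.array([[2.0, -1.0], [3.0, 0.0]]),
     np.array([[2.0, 3.0], [-1.0, 0.0]])]

for dlt in (0.35, 0.4, 0.5):
    print(dlt, [round(path_payoff(i, prefix=[c], cycle=[a], delta=dlt), 2)
                for i in range(2)])
# (3 - delta, 3*delta - 1): (2.65, 0.05), (2.6, 0.2), (2.5, 0.5)
```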

Salonen and Vartiainen (2008) find that there are discontinuities on the border of the payoff set. We can see this clearly in some games, like the prisoner's dilemma and chicken, where the Pareto efficient frontier contains large holes. This can be explained by the fractal nature of the payoffs. The payoff set consists of disconnected points when the discount factor is low, and when the discount factor is increased, the payoff set starts to fill. There may, however, be parts of the payoff space that remain sparse for very high discount factors. These sparse parts depend on the underlying stage game. For example, in the leader game of Section 4.2, the Pareto frontier between b and c fills when δ = 1/2, but no payoff point appears below this line until the discount factor is considerably higher. Thus, the triangle of feasible payoffs, i.e., the space between the points (6,6), (7,6) and (6,7), fills very slowly.

[Fig. 4: Payoff sets in prisoner's dilemma games with different discount factors: a) δ = 0.35 (+), 0.4 (x) and 0.5 (·); b) unequal discount factors δ1 = 0.57 and δ2 = 0.53.]

It is also possible to examine games with unequal discount factors; see, e.g., Lehrer and Pauzner (1999). The payoff set of the prisoner's dilemma of Section 4.2 is illustrated in Fig. 4b), where the discount factors are δ1 = 0.57 and δ2 = 0.53. We can see that the payoff set is a bit tilted to one side and is more sparse on the southern side of the almost symmetric fractal. This is because some of the actions of player 2 are not possible, as player 2 is less patient.

Finally, we note that the payoff values may affect the equilibria. This means that we cannot say that all prisoner's dilemmas behave in exactly the same way, even though there are some similarities. For example, if actions b and c give low payoffs, then the payoff set lies mostly on the line segment from d to a. On the other hand, if b and c give high payoffs and a gives a low payoff, then the payoff set is above d but below the line between b and c. Moreover, some payoff values do not have any effect on the equilibria. For example, in the battle of the sexes the value of P, and in coordination the value of S, have no effect on the SPE paths.

5 Discussion

This paper provides new methods for analyzing and computing the subgame perfect pure strategy equilibrium paths and payoff sets in discounted supergames with perfect monitoring. Berg and Kitti (2011) present the underlying theory, following the tradition of Abreu et al. (1990) for the recursive characterization of equilibrium payoffs. This paper gives a simple presentation of the key ideas using only paths of action profiles. We apply the algorithms to the symmetric 2×2 games. However, the methods can also be used in analyzing asymmetric games where there are several players with more than two actions.

We have shown that the SPE paths can be conveniently represented with a directed multigraph. It allows us to analyze what kinds of actions are possible in the game, how complex the paths can be, and what happens in the game when the discount factor changes. The multigraph also offers a unique view of the payoff sets, which are particular fractals. Moreover, it turns out that there are useful tools for analyzing the multigraphs, i.e., the graph-directed constructions (Mauldin and Williams 1988).

There are a couple of observations we want to emphasize. One is the difference between the path and the payoff set dimensions. When the discount factor is low and the payoff set is sparse, the SPE paths may increase without changing the payoff set dimension. This happens when the multigraph does not have a state with multiple cycles, which are related to the dimension. It is also possible that the paths remain the same but the dimension increases. Thus, the dimension increases for two reasons: i) the discount factor increases, which makes the mappings on the cycles less contractive, and ii) the number of cycles increases. Moreover, it is difficult to estimate the exact payoff set dimension when the payoff points overlap. We also observe that the payoff sets fill up in a distinctive way for different games and for different parts of the payoff space. This gives a new insight into folk theorems; see, e.g., Fudenberg and Maskin (1986). For example, it may require a very high discount factor to fill the Pareto efficient frontier of the game. The payoff set may also remain sparse in other parts of the payoff space.

References

Abreu, D. (1988). On the theory of infinitely repeated games with discounting. Econometrica, 56(2).
Abreu, D., Rubinstein, A. (1988). The structure of Nash equilibrium in repeated games with finite automata. Econometrica, 56(6).
Abreu, D., Pearce, D., Stacchetti, E. (1986). Optimal cartel equilibria with imperfect monitoring. Journal of Economic Theory, 39(1).
Abreu, D., Pearce, D., Stacchetti, E. (1990). Toward a theory of discounted repeated games with imperfect monitoring. Econometrica, 58(5).
Axelrod, R. (1984). The Evolution of Cooperation. New York: Basic Books.

Benoit, J.P., Krishna, V. (1985). Finitely repeated games. Econometrica, 53(4).
Berg, K., Kitti, M. (2011). Equilibrium paths in discounted supergames. Working paper.
Brams, S.J. (2003). Negotiation Games: Applying Game Theory to Bargaining and Arbitration. Routledge.
Cronshaw, M.B. (1997). Algorithms for finding repeated game equilibria. Computational Economics, 10.
Cronshaw, M.B., Luenberger, D.G. (1994). Strongly symmetric subgame perfect equilibria in infinitely repeated games with perfect monitoring. Games and Economic Behavior, 6.
Edgar, G.A., Golds, J. (1999). A fractal dimension estimate for a graph-directed iterated function system of non-similarities. Indiana University Mathematics Journal, 48(2).
Edgar, G.A., Mauldin, R.D. (1992). Multifractal decompositions of digraph recursive fractals. Proceedings of the London Mathematical Society, 65.
Falconer, K.J. (1988). The Hausdorff dimension of self-affine fractals. Mathematical Proceedings of the Cambridge Philosophical Society, 103.
Falconer, K.J. (1992). The dimension of self-affine fractals II. Mathematical Proceedings of the Cambridge Philosophical Society, 111.
Fudenberg, D., Maskin, E. (1986). The folk theorem in repeated games with discounting and incomplete information. Econometrica, 54.
Hauert, C. (2001). Fundamental clusters in 2×2 spatial games. Proceedings of the Royal Society B, 268.
Judd, K., Yeltekin, Ş., Conklin, J. (2003). Computing supergame equilibria. Econometrica, 71.
Kilgour, D.M., Fraser, N.M. (1988). A taxonomy of all ordinal 2×2 games. Theory and Decision, 24.
Kitti, M. (2010). Quasi-stationary equilibria in dynamic games. Working paper.
Kitti, M. (2011). Quasi-Markov equilibria in discounted dynamic games. Working paper.
Lehrer, E., Pauzner, A. (1999). Repeated games with differential time preferences. Econometrica, 67.
Mailath, G.J., Samuelson, L. (2006). Repeated Games and Reputations: Long-Run Relationships. Oxford University Press.
Mailath, G.J., Obara, I., Sekiguchi, T. (2002). The maximum efficient equilibrium payoff in the repeated prisoners' dilemma. Games and Economic Behavior, 40.
Mauldin, R.D., Williams, S.C. (1988). Hausdorff dimension in graph directed constructions. Transactions of the American Mathematical Society, 309(2).
Maynard Smith, J. (1982). Evolution and the Theory of Games. Cambridge University Press.

Ngai, S.M., Wang, Y. (2001). Hausdorff dimension of self-similar sets with overlaps. Journal of the London Mathematical Society, 63.
Rapoport, A., Guyer, M. (1966). A taxonomy of 2×2 games. General Systems: Yearbook of the Society for General Systems Research, 11.
Rellick, L.M., Edgar, G.A., Klapper, M.H. (1991). Calculating the Hausdorff dimension of tree structures. Journal of Statistical Physics, 64(1).
Robinson, D., Goforth, D. (2005). The Topology of the 2×2 Games: A New Periodic Table. Routledge.
Rubinstein, A. (1986). Finite automata play the repeated prisoner's dilemma. Journal of Economic Theory, 39.
Salonen, H., Vartiainen, H. (2008). Valuating payoff streams under unequal discount factors. Economics Letters, 99(3).
Tarjan, R. (1972). Depth-first search and linear graph algorithms. SIAM Journal on Computing, 1(2).
Walliser, B. (1988). A simplified taxonomy of 2×2 games. Theory and Decision, 25.


Game Theory. Lecture Notes By Y. Narahari. Department of Computer Science and Automation Indian Institute of Science Bangalore, India August 2012 Game Theory Lecture Notes By Y. Narahari Department of Computer Science and Automation Indian Institute of Science Bangalore, India August 2012 Chapter 6: Mixed Strategies and Mixed Strategy Nash Equilibrium

More information

Best-Reply Sets. Jonathan Weinstein Washington University in St. Louis. This version: May 2015

Best-Reply Sets. Jonathan Weinstein Washington University in St. Louis. This version: May 2015 Best-Reply Sets Jonathan Weinstein Washington University in St. Louis This version: May 2015 Introduction The best-reply correspondence of a game the mapping from beliefs over one s opponents actions to

More information

Duopoly models Multistage games with observed actions Subgame perfect equilibrium Extensive form of a game Two-stage prisoner s dilemma

Duopoly models Multistage games with observed actions Subgame perfect equilibrium Extensive form of a game Two-stage prisoner s dilemma Recap Last class (September 20, 2016) Duopoly models Multistage games with observed actions Subgame perfect equilibrium Extensive form of a game Two-stage prisoner s dilemma Today (October 13, 2016) Finitely

More information

Introductory Microeconomics

Introductory Microeconomics Prof. Wolfram Elsner Faculty of Business Studies and Economics iino Institute of Institutional and Innovation Economics Introductory Microeconomics More Formal Concepts of Game Theory and Evolutionary

More information

IPR Protection in the High-Tech Industries: A Model of Piracy. Thierry Rayna University of Bristol

IPR Protection in the High-Tech Industries: A Model of Piracy. Thierry Rayna University of Bristol IPR Protection in the High-Tech Industries: A Model of Piracy Thierry Rayna University of Bristol thierry.rayna@bris.ac.uk Digital Goods Are Public, Aren t They? For digital goods to be non-rival, copy

More information

Renegotiation in Repeated Games with Side-Payments 1

Renegotiation in Repeated Games with Side-Payments 1 Games and Economic Behavior 33, 159 176 (2000) doi:10.1006/game.1999.0769, available online at http://www.idealibrary.com on Renegotiation in Repeated Games with Side-Payments 1 Sandeep Baliga Kellogg

More information

Relational Incentive Contracts

Relational Incentive Contracts Relational Incentive Contracts Jonathan Levin May 2006 These notes consider Levin s (2003) paper on relational incentive contracts, which studies how self-enforcing contracts can provide incentives in

More information

Early PD experiments

Early PD experiments REPEATED GAMES 1 Early PD experiments In 1950, Merrill Flood and Melvin Dresher (at RAND) devised an experiment to test Nash s theory about defection in a two-person prisoners dilemma. Experimental Design

More information

In reality; some cases of prisoner s dilemma end in cooperation. Game Theory Dr. F. Fatemi Page 219

In reality; some cases of prisoner s dilemma end in cooperation. Game Theory Dr. F. Fatemi Page 219 Repeated Games Basic lesson of prisoner s dilemma: In one-shot interaction, individual s have incentive to behave opportunistically Leads to socially inefficient outcomes In reality; some cases of prisoner

More information

ECE 586BH: Problem Set 5: Problems and Solutions Multistage games, including repeated games, with observed moves

ECE 586BH: Problem Set 5: Problems and Solutions Multistage games, including repeated games, with observed moves University of Illinois Spring 01 ECE 586BH: Problem Set 5: Problems and Solutions Multistage games, including repeated games, with observed moves Due: Reading: Thursday, April 11 at beginning of class

More information

Repeated Games. Econ 400. University of Notre Dame. Econ 400 (ND) Repeated Games 1 / 48

Repeated Games. Econ 400. University of Notre Dame. Econ 400 (ND) Repeated Games 1 / 48 Repeated Games Econ 400 University of Notre Dame Econ 400 (ND) Repeated Games 1 / 48 Relationships and Long-Lived Institutions Business (and personal) relationships: Being caught cheating leads to punishment

More information

Infinitely Repeated Games

Infinitely Repeated Games February 10 Infinitely Repeated Games Recall the following theorem Theorem 72 If a game has a unique Nash equilibrium, then its finite repetition has a unique SPNE. Our intuition, however, is that long-term

More information

Economics 171: Final Exam

Economics 171: Final Exam Question 1: Basic Concepts (20 points) Economics 171: Final Exam 1. Is it true that every strategy is either strictly dominated or is a dominant strategy? Explain. (5) No, some strategies are neither dominated

More information

TR : Knowledge-Based Rational Decisions and Nash Paths

TR : Knowledge-Based Rational Decisions and Nash Paths City University of New York (CUNY) CUNY Academic Works Computer Science Technical Reports Graduate Center 2009 TR-2009015: Knowledge-Based Rational Decisions and Nash Paths Sergei Artemov Follow this and

More information

Rational Behaviour and Strategy Construction in Infinite Multiplayer Games

Rational Behaviour and Strategy Construction in Infinite Multiplayer Games Rational Behaviour and Strategy Construction in Infinite Multiplayer Games Michael Ummels ummels@logic.rwth-aachen.de FSTTCS 2006 Michael Ummels Rational Behaviour and Strategy Construction 1 / 15 Infinite

More information

MA200.2 Game Theory II, LSE

MA200.2 Game Theory II, LSE MA200.2 Game Theory II, LSE Problem Set 1 These questions will go over basic game-theoretic concepts and some applications. homework is due during class on week 4. This [1] In this problem (see Fudenberg-Tirole

More information

6.254 : Game Theory with Engineering Applications Lecture 3: Strategic Form Games - Solution Concepts

6.254 : Game Theory with Engineering Applications Lecture 3: Strategic Form Games - Solution Concepts 6.254 : Game Theory with Engineering Applications Lecture 3: Strategic Form Games - Solution Concepts Asu Ozdaglar MIT February 9, 2010 1 Introduction Outline Review Examples of Pure Strategy Nash Equilibria

More information

Long run equilibria in an asymmetric oligopoly

Long run equilibria in an asymmetric oligopoly Economic Theory 14, 705 715 (1999) Long run equilibria in an asymmetric oligopoly Yasuhito Tanaka Faculty of Law, Chuo University, 742-1, Higashinakano, Hachioji, Tokyo, 192-03, JAPAN (e-mail: yasuhito@tamacc.chuo-u.ac.jp)

More information

CS 798: Homework Assignment 4 (Game Theory)

CS 798: Homework Assignment 4 (Game Theory) 0 5 CS 798: Homework Assignment 4 (Game Theory) 1.0 Preferences Assigned: October 28, 2009 Suppose that you equally like a banana and a lottery that gives you an apple 30% of the time and a carrot 70%

More information

Algorithms and Networking for Computer Games

Algorithms and Networking for Computer Games Algorithms and Networking for Computer Games Chapter 4: Game Trees http://www.wiley.com/go/smed Game types perfect information games no hidden information two-player, perfect information games Noughts

More information

Finite Memory and Imperfect Monitoring

Finite Memory and Imperfect Monitoring Federal Reserve Bank of Minneapolis Research Department Staff Report 287 March 2001 Finite Memory and Imperfect Monitoring Harold L. Cole University of California, Los Angeles and Federal Reserve Bank

More information

On Existence of Equilibria. Bayesian Allocation-Mechanisms

On Existence of Equilibria. Bayesian Allocation-Mechanisms On Existence of Equilibria in Bayesian Allocation Mechanisms Northwestern University April 23, 2014 Bayesian Allocation Mechanisms In allocation mechanisms, agents choose messages. The messages determine

More information

On Forchheimer s Model of Dominant Firm Price Leadership

On Forchheimer s Model of Dominant Firm Price Leadership On Forchheimer s Model of Dominant Firm Price Leadership Attila Tasnádi Department of Mathematics, Budapest University of Economic Sciences and Public Administration, H-1093 Budapest, Fővám tér 8, Hungary

More information

Chapter 2 Strategic Dominance

Chapter 2 Strategic Dominance Chapter 2 Strategic Dominance 2.1 Prisoner s Dilemma Let us start with perhaps the most famous example in Game Theory, the Prisoner s Dilemma. 1 This is a two-player normal-form (simultaneous move) game.

More information

A brief introduction to evolutionary game theory

A brief introduction to evolutionary game theory A brief introduction to evolutionary game theory Thomas Brihaye UMONS 27 October 2015 Outline 1 An example, three points of view 2 A brief review of strategic games Nash equilibrium et al Symmetric two-player

More information

An introduction on game theory for wireless networking [1]

An introduction on game theory for wireless networking [1] An introduction on game theory for wireless networking [1] Ning Zhang 14 May, 2012 [1] Game Theory in Wireless Networks: A Tutorial 1 Roadmap 1 Introduction 2 Static games 3 Extensive-form games 4 Summary

More information

Introduction to Game Theory

Introduction to Game Theory Introduction to Game Theory What is a Game? A game is a formal representation of a situation in which a number of individuals interact in a setting of strategic interdependence. By that, we mean that each

More information

Repeated Games. Olivier Gossner and Tristan Tomala. December 6, 2007

Repeated Games. Olivier Gossner and Tristan Tomala. December 6, 2007 Repeated Games Olivier Gossner and Tristan Tomala December 6, 2007 1 The subject and its importance Repeated interactions arise in several domains such as Economics, Computer Science, and Biology. The

More information

January 26,

January 26, January 26, 2015 Exercise 9 7.c.1, 7.d.1, 7.d.2, 8.b.1, 8.b.2, 8.b.3, 8.b.4,8.b.5, 8.d.1, 8.d.2 Example 10 There are two divisions of a firm (1 and 2) that would benefit from a research project conducted

More information

Subgame Perfect Cooperation in an Extensive Game

Subgame Perfect Cooperation in an Extensive Game Subgame Perfect Cooperation in an Extensive Game Parkash Chander * and Myrna Wooders May 1, 2011 Abstract We propose a new concept of core for games in extensive form and label it the γ-core of an extensive

More information

A Theory of Value Distribution in Social Exchange Networks

A Theory of Value Distribution in Social Exchange Networks A Theory of Value Distribution in Social Exchange Networks Kang Rong, Qianfeng Tang School of Economics, Shanghai University of Finance and Economics, Shanghai 00433, China Key Laboratory of Mathematical

More information

ECONS 424 STRATEGY AND GAME THEORY MIDTERM EXAM #2 ANSWER KEY

ECONS 424 STRATEGY AND GAME THEORY MIDTERM EXAM #2 ANSWER KEY ECONS 44 STRATEGY AND GAE THEORY IDTER EXA # ANSWER KEY Exercise #1. Hawk-Dove game. Consider the following payoff matrix representing the Hawk-Dove game. Intuitively, Players 1 and compete for a resource,

More information

Chapter 10: Mixed strategies Nash equilibria, reaction curves and the equality of payoffs theorem

Chapter 10: Mixed strategies Nash equilibria, reaction curves and the equality of payoffs theorem Chapter 10: Mixed strategies Nash equilibria reaction curves and the equality of payoffs theorem Nash equilibrium: The concept of Nash equilibrium can be extended in a natural manner to the mixed strategies

More information

Basic Game-Theoretic Concepts. Game in strategic form has following elements. Player set N. (Pure) strategy set for player i, S i.

Basic Game-Theoretic Concepts. Game in strategic form has following elements. Player set N. (Pure) strategy set for player i, S i. Basic Game-Theoretic Concepts Game in strategic form has following elements Player set N (Pure) strategy set for player i, S i. Payoff function f i for player i f i : S R, where S is product of S i s.

More information

Finding Equilibria in Games of No Chance

Finding Equilibria in Games of No Chance Finding Equilibria in Games of No Chance Kristoffer Arnsfelt Hansen, Peter Bro Miltersen, and Troels Bjerre Sørensen Department of Computer Science, University of Aarhus, Denmark {arnsfelt,bromille,trold}@daimi.au.dk

More information

Credible Threats, Reputation and Private Monitoring.

Credible Threats, Reputation and Private Monitoring. Credible Threats, Reputation and Private Monitoring. Olivier Compte First Version: June 2001 This Version: November 2003 Abstract In principal-agent relationships, a termination threat is often thought

More information

Microeconomic Theory II Preliminary Examination Solutions

Microeconomic Theory II Preliminary Examination Solutions Microeconomic Theory II Preliminary Examination Solutions 1. (45 points) Consider the following normal form game played by Bruce and Sheila: L Sheila R T 1, 0 3, 3 Bruce M 1, x 0, 0 B 0, 0 4, 1 (a) Suppose

More information

Optimal selling rules for repeated transactions.

Optimal selling rules for repeated transactions. Optimal selling rules for repeated transactions. Ilan Kremer and Andrzej Skrzypacz March 21, 2002 1 Introduction In many papers considering the sale of many objects in a sequence of auctions the seller

More information

Game-Theoretic Approach to Bank Loan Repayment. Andrzej Paliński

Game-Theoretic Approach to Bank Loan Repayment. Andrzej Paliński Decision Making in Manufacturing and Services Vol. 9 2015 No. 1 pp. 79 88 Game-Theoretic Approach to Bank Loan Repayment Andrzej Paliński Abstract. This paper presents a model of bank-loan repayment as

More information

Iterated Dominance and Nash Equilibrium

Iterated Dominance and Nash Equilibrium Chapter 11 Iterated Dominance and Nash Equilibrium In the previous chapter we examined simultaneous move games in which each player had a dominant strategy; the Prisoner s Dilemma game was one example.

More information

Outline Introduction Game Representations Reductions Solution Concepts. Game Theory. Enrico Franchi. May 19, 2010

Outline Introduction Game Representations Reductions Solution Concepts. Game Theory. Enrico Franchi. May 19, 2010 May 19, 2010 1 Introduction Scope of Agent preferences Utility Functions 2 Game Representations Example: Game-1 Extended Form Strategic Form Equivalences 3 Reductions Best Response Domination 4 Solution

More information

Unraveling versus Unraveling: A Memo on Competitive Equilibriums and Trade in Insurance Markets

Unraveling versus Unraveling: A Memo on Competitive Equilibriums and Trade in Insurance Markets Unraveling versus Unraveling: A Memo on Competitive Equilibriums and Trade in Insurance Markets Nathaniel Hendren October, 2013 Abstract Both Akerlof (1970) and Rothschild and Stiglitz (1976) show that

More information

Best response cycles in perfect information games

Best response cycles in perfect information games P. Jean-Jacques Herings, Arkadi Predtetchinski Best response cycles in perfect information games RM/15/017 Best response cycles in perfect information games P. Jean Jacques Herings and Arkadi Predtetchinski

More information

Prisoner s dilemma with T = 1

Prisoner s dilemma with T = 1 REPEATED GAMES Overview Context: players (e.g., firms) interact with each other on an ongoing basis Concepts: repeated games, grim strategies Economic principle: repetition helps enforcing otherwise unenforceable

More information

Topics in Contract Theory Lecture 3

Topics in Contract Theory Lecture 3 Leonardo Felli 9 January, 2002 Topics in Contract Theory Lecture 3 Consider now a different cause for the failure of the Coase Theorem: the presence of transaction costs. Of course for this to be an interesting

More information

A Theory of Value Distribution in Social Exchange Networks

A Theory of Value Distribution in Social Exchange Networks A Theory of Value Distribution in Social Exchange Networks Kang Rong, Qianfeng Tang School of Economics, Shanghai University of Finance and Economics, Shanghai 00433, China Key Laboratory of Mathematical

More information

Yao s Minimax Principle

Yao s Minimax Principle Complexity of algorithms The complexity of an algorithm is usually measured with respect to the size of the input, where size may for example refer to the length of a binary word describing the input,

More information

The Ohio State University Department of Economics Econ 601 Prof. James Peck Extra Practice Problems Answers (for final)

The Ohio State University Department of Economics Econ 601 Prof. James Peck Extra Practice Problems Answers (for final) The Ohio State University Department of Economics Econ 601 Prof. James Peck Extra Practice Problems Answers (for final) Watson, Chapter 15, Exercise 1(part a). Looking at the final subgame, player 1 must

More information

Economics and Computation

Economics and Computation Economics and Computation ECON 425/563 and CPSC 455/555 Professor Dirk Bergemann and Professor Joan Feigenbaum Reputation Systems In case of any questions and/or remarks on these lecture notes, please

More information

Introduction to game theory LECTURE 2

Introduction to game theory LECTURE 2 Introduction to game theory LECTURE 2 Jörgen Weibull February 4, 2010 Two topics today: 1. Existence of Nash equilibria (Lecture notes Chapter 10 and Appendix A) 2. Relations between equilibrium and rationality

More information

INTERIM CORRELATED RATIONALIZABILITY IN INFINITE GAMES

INTERIM CORRELATED RATIONALIZABILITY IN INFINITE GAMES INTERIM CORRELATED RATIONALIZABILITY IN INFINITE GAMES JONATHAN WEINSTEIN AND MUHAMET YILDIZ A. We show that, under the usual continuity and compactness assumptions, interim correlated rationalizability

More information

ECON 459 Game Theory. Lecture Notes Auctions. Luca Anderlini Spring 2017

ECON 459 Game Theory. Lecture Notes Auctions. Luca Anderlini Spring 2017 ECON 459 Game Theory Lecture Notes Auctions Luca Anderlini Spring 2017 These notes have been used and commented on before. If you can still spot any errors or have any suggestions for improvement, please

More information

MATH 121 GAME THEORY REVIEW

MATH 121 GAME THEORY REVIEW MATH 121 GAME THEORY REVIEW ERIN PEARSE Contents 1. Definitions 2 1.1. Non-cooperative Games 2 1.2. Cooperative 2-person Games 4 1.3. Cooperative n-person Games (in coalitional form) 6 2. Theorems and

More information

Chapter 8. Repeated Games. Strategies and payoffs for games played twice

Chapter 8. Repeated Games. Strategies and payoffs for games played twice Chapter 8 epeated Games 1 Strategies and payoffs for games played twice Finitely repeated games Discounted utility and normalized utility Complete plans of play for 2 2 games played twice Trigger strategies

More information

CS 331: Artificial Intelligence Game Theory I. Prisoner s Dilemma

CS 331: Artificial Intelligence Game Theory I. Prisoner s Dilemma CS 331: Artificial Intelligence Game Theory I 1 Prisoner s Dilemma You and your partner have both been caught red handed near the scene of a burglary. Both of you have been brought to the police station,

More information

Finitely repeated simultaneous move game.

Finitely repeated simultaneous move game. Finitely repeated simultaneous move game. Consider a normal form game (simultaneous move game) Γ N which is played repeatedly for a finite (T )number of times. The normal form game which is played repeatedly

More information

On Replicator Dynamics and Evolutionary Games

On Replicator Dynamics and Evolutionary Games Explorations On Replicator Dynamics and Evolutionary Games Joseph D. Krenicky Mathematics Faculty Mentor: Dr. Jan Rychtar Abstract We study the replicator dynamics of two player games. We summarize the

More information

Moral Hazard and Private Monitoring

Moral Hazard and Private Monitoring Moral Hazard and Private Monitoring V. Bhaskar & Eric van Damme This version: April 2000 Abstract We clarify the role of mixed strategies and public randomization (sunspots) in sustaining near-efficient

More information

NASH PROGRAM Abstract: Nash program

NASH PROGRAM Abstract: Nash program NASH PROGRAM by Roberto Serrano Department of Economics, Brown University May 2005 (to appear in The New Palgrave Dictionary of Economics, 2nd edition, McMillan, London) Abstract: This article is a brief

More information

2 Game Theory: Basic Concepts

2 Game Theory: Basic Concepts 2 Game Theory Basic Concepts High-rationality solution concepts in game theory can emerge in a world populated by low-rationality agents. Young (199) The philosophers kick up the dust and then complain

More information

High Frequency Repeated Games with Costly Monitoring

High Frequency Repeated Games with Costly Monitoring High Frequency Repeated Games with Costly Monitoring Ehud Lehrer and Eilon Solan October 25, 2016 Abstract We study two-player discounted repeated games in which a player cannot monitor the other unless

More information

Sublinear Time Algorithms Oct 19, Lecture 1

Sublinear Time Algorithms Oct 19, Lecture 1 0368.416701 Sublinear Time Algorithms Oct 19, 2009 Lecturer: Ronitt Rubinfeld Lecture 1 Scribe: Daniel Shahaf 1 Sublinear-time algorithms: motivation Twenty years ago, there was practically no investigation

More information

A folk theorem for one-shot Bertrand games

A folk theorem for one-shot Bertrand games Economics Letters 6 (999) 9 6 A folk theorem for one-shot Bertrand games Michael R. Baye *, John Morgan a, b a Indiana University, Kelley School of Business, 309 East Tenth St., Bloomington, IN 4740-70,

More information

Microeconomic Theory II Preliminary Examination Solutions Exam date: June 5, 2017

Microeconomic Theory II Preliminary Examination Solutions Exam date: June 5, 2017 Microeconomic Theory II Preliminary Examination Solutions Exam date: June 5, 07. (40 points) Consider a Cournot duopoly. The market price is given by q q, where q and q are the quantities of output produced

More information

Signaling Games. Farhad Ghassemi

Signaling Games. Farhad Ghassemi Signaling Games Farhad Ghassemi Abstract - We give an overview of signaling games and their relevant solution concept, perfect Bayesian equilibrium. We introduce an example of signaling games and analyze

More information