Game Theory for Santa Fe

Joel, August 14, 2012

OUTLINE 1. Definition of Game 2. Examples 3. Making Predictions 4. Extensive Forms 5. Incomplete Information in Games 6. Communication in Games 7. Design Issues

Taxonomy (Loose) 1. Cooperative versus Noncooperative 2. Extensive versus Normal Form 3. Perfect versus Imperfect Information 4. Incomplete versus Complete Information Today mostly: noncooperative, imperfect information

Game: Definition Set of players, i = 1, ..., I. (Need not be finite.) Strategy set S = S_1 × ... × S_I. Payoff functions u_i : S → R. This is a normal-form, non-cooperative game.

Comments on Strategy Set Notation: s_i ∈ S_i is a strategy for Player i. s = (s_1, ..., s_I) is a strategy profile (sometimes: outcome). s_{-i} = (s_1, ..., s_{i-1}, s_{i+1}, ..., s_I). Comments: The concept of strategy is extremely subtle and general. We'll need to extend to mixed strategies or correlated strategies, replacing S by Δ(S_1) × ... × Δ(S_I) in the first case and by Δ(S) in the second case.

Comments on Utility Functions Note: Player i's utility depends on s_{-i}, not just on s_i. This is the essence of game theory. Extension to mixed or correlated strategies is by linearity. There is a normative theory justifying the extension.

Game Theory for Social Science Start with strategic problem. Translate problem into game. Solve game. Translate solution back to motivating problem. First step is hard (finding interesting problem). Second step requires making choices. It is something of an art. Third step is both technical and philosophical. What is a solution? Fourth step is easy (although there is a tendency to exaggerate).

GAMES AND ECONOMIC BEHAVIOR
        c1
UP     1, 1
DOWN   1, 0
Note: Matrix representation of two-player games. One row for each strategy of Player 1 (ROW). One column for each strategy of Player 2 (COLUMN). In each cell of the payoff matrix the first number is ROW's payoff, the second is COLUMN's payoff. This game is a decision problem. It is trivial in the sense that R gets the same payoff no matter what he does. But C cares about R's choice of action.

Matching Pennies
         H        T
H      1, -1   -1, 1
T     -1, 1     1, -1
Players simultaneously decide whether to reveal the Head (H) or Tail (T) of a penny. If the pennies match, Row wins; if not, Column wins. Pure conflict ("zero sum"). Intuition (symmetry) suggests that each player should expect a zero payoff, but there are no zeros in the payoff matrix.

Coordination
         Call   Wait
Call    0, 0   2, 2
Wait    2, 2   0, 0
Players receive the same payoff (no conflict of interest). Friends talk on the phone. The line goes dead. They want to resume the conversation. If both try to call, they get busy signals. If neither calls, then they cannot talk. Two sensible predictions.

Dominance
         Left    Right
Up     10, 1    2, 0
Down    5, 2    1, 100
No matter what Player 2 does, Player 1 prefers Up. If Column understands this, she'll play Left, even though that gives her no chance to get her favorite payoff.

Commitment
         Left   Right
Up     10, 0   6, 1
Down    8, 1   5, 0
Player 1's strategy UP is dominant. If Column understands this, she'll play RIGHT. Player 1 would do better if he did not have the strategy UP, because then Column would play LEFT and Player 1 would receive 8 instead of 6.

Prisoner's Dilemma
         Left   Right
Up     4, 4   0, 5
Down   5, 0   1, 1
Here both players have dominant strategies. If both play them, the result is inefficient.

Battle of the Sexes
         Left   Right
Up     4, 1   0, 0
Down   0, 0   1, 4
Players want to coordinate, but have different views of the best place to coordinate.

Risk versus Efficiency
         Left    Right
Up    10, 10    0, 9
Down   9, 0     6, 6
Players want to coordinate to receive 10, but playing Up is a bad idea when the opponent plays Right. Note: Row wants Column to play Left no matter what Row plans to do.

Matching Pennies with a Cheater
        HH      HT      TH      TT
H     1, -1   1, -1   -1, 1   -1, 1
T    -1, 1    1, -1   -1, 1    1, -1
Like matching pennies, but now Column can observe Row's move before deciding what to do. Note the difference in strategies: Column's strategy XY means play X after seeing H and Y after seeing T.

DEFINITIONS s_i is a best response to s_{-i} if it solves max_{s'_i ∈ S_i} u_i(s'_i, s_{-i}). Similarly, best response to beliefs. s_i is (strictly) dominated (by s'_i) if for all s_{-i}: u_i(s'_i, s_{-i}) > u_i(s_i, s_{-i}). Strictly dominated strategies cannot be best responses. s_i is weakly dominated (by s'_i) if for all s_{-i}: u_i(s'_i, s_{-i}) ≥ u_i(s_i, s_{-i}), with strict inequality for at least one s_{-i}. s_i is a (weakly) dominant strategy if it (weakly) dominates all other s'_i ∈ S_i. Security level: max_{s_i ∈ S_i} min_{s_{-i} ∈ S_{-i}} u_i(s_i, s_{-i}).
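These definitions translate directly into code. A minimal sketch (the payoffs are the Dominance example above; the function and variable names are illustrative, not from the lecture):

```python
# Row player's payoffs in the "Dominance" example: Up/Down vs Left/Right.
U1 = {('Up', 'Left'): 10, ('Up', 'Right'): 2,
      ('Down', 'Left'): 5, ('Down', 'Right'): 1}
ROWS, COLS = ['Up', 'Down'], ['Left', 'Right']

def best_responses(u, own, other_strategy):
    """Pure-strategy best responses to a fixed opponent strategy."""
    best = max(u[s, other_strategy] for s in own)
    return [s for s in own if u[s, other_strategy] == best]

def strictly_dominates(u, s_prime, s, others):
    """s_prime strictly dominates s: strictly better against every opponent strategy."""
    return all(u[s_prime, t] > u[s, t] for t in others)

def security_level(u, own, others):
    """max over own strategies of the worst-case payoff."""
    return max(min(u[s, t] for t in others) for s in own)

print(best_responses(U1, ROWS, 'Left'))           # ['Up']
print(strictly_dominates(U1, 'Up', 'Down', COLS)) # True
print(security_level(U1, ROWS, COLS))             # 2
```

So in this game Up is dominant and also achieves Row's pure-strategy security level of 2.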

SIGNIFICANCE Rational players avoid (strictly) dominated strategies. Security Level provides a lower bound to a player s payoffs. How do these concepts apply to examples?

Mixed Strategies Given any game, one can extend the game by replacing S_i by Δ(S_i) and by extending u_i to the new, larger domain. An element σ_i ∈ Δ(S_i) is called a mixed strategy: Σ_{s_i ∈ S_i} σ_i(s_i) = 1 and σ_i(s_i) ≥ 0 for all s_i ∈ S_i. σ_i(s_i) is interpreted as the probability that Player i selects (pure) strategy s_i under σ_i. U_i(σ) = Σ_{s ∈ S} (Π_{i=1}^I σ_i(s_i)) u_i(s). Expected utility justifies this.

Why Mix? 1. Raises security level (to 0 in matching pennies). 2. Creates uncertainty for opponents. 3. Makes Best Response Correspondence Continuous. (By linearity, all pure strategies in the support of a best response are best responses.)

More... 4. Increases the power of dominance:
          Left     Right
Up      10, 0     0, 10
Middle   0, 10   10, 0
Down     a, 1     a, 1
Down is dominated by a pure strategy when a < 0, and by no pure strategy when a ≥ 0. It is dominated by a mixed strategy when a < 5, and by no mixed (hence no pure or mixed) strategy when a ≥ 5. Unless otherwise noted, from now on "strategy" means mixed strategy.
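The thresholds can be checked numerically. A sketch for the 3x2 game above, where Row's payoffs are Up = (10, 0), Middle = (0, 10), Down = (a, a); a mixture q·Up + (1-q)·Middle pays (10q, 10(1-q)) and dominates Down iff both entries exceed a (the grid search is an approximation):

```python
# Is Down strictly dominated by some mixture of Up and Middle?
# Row payoffs: Up = (10, 0), Middle = (0, 10), Down = (a, a).
def dominated_by_mixture(a, grid=1001):
    for i in range(grid):
        q = i / (grid - 1)                      # weight on Up
        if 10 * q > a and 10 * (1 - q) > a:     # beats Down in both columns
            return True
    return False

print(dominated_by_mixture(-1))   # True: even a pure strategy works
print(dominated_by_mixture(4))    # True: e.g. q = 1/2 pays (5, 5)
print(dominated_by_mixture(5))    # False: no mixture beats 5 in both columns
```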

Interpretation of Mixed Strategy Conscious Randomization. Opponent uncertain ( purification ).

Correlated Strategies Π_{i=1}^I Δ(S_i) is smaller than Δ(S). (The first contains only product measures, i.e., independent distributions.) For example:
         Left   Right
Up       .5      0
Down      0     .5
(numbers are probabilities, not payoffs)

Implications of Correlation None for security level (linearity of payoffs) Makes it harder for a strategy to be dominated (when I > 2) Useful idea in repeated games, mechanism design, equilibrium theory.

Rationalizability Agents who maximize utility subject to beliefs avoid strictly dominated strategies. If a player believes her opponents also maximize subject to beliefs, the process can be iterated. This leads to two closely related ideas: Iterated Deletion of Strictly Dominated Strategies (IDDS) and Rationalizability. The ideas are identical for two-player games. In general, dominance is related to being a best response to correlated beliefs; rationalizability is related to being a best response to the (smaller set of) independent beliefs. Dominance Solvable: the iterative process terminates with one strategy profile.

Dominance Solvable Games 1. Are rare. 2. Linear Cournot 3. Guessing Games 4. Often do not provide good predictions (especially when many rounds of deletions are needed).
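The iterative deletion process can be sketched in code. The example game below is a standard 2x3 dominance-solvable illustration (an assumption, not from the lecture); only pure-strategy dominance is checked:

```python
# A standard dominance-solvable game:
#           L      M      R
#   T     1,0    1,2    0,1
#   B     0,3    0,1    2,0
U = {('T','L'): (1,0), ('T','M'): (1,2), ('T','R'): (0,1),
     ('B','L'): (0,3), ('B','M'): (0,1), ('B','R'): (2,0)}

def idds(rows, cols, u):
    """Iterated deletion of strictly dominated pure strategies."""
    changed = True
    while changed:
        changed = False
        for own, others, idx in [(rows, cols, 0), (cols, rows, 1)]:
            def pay(mine, theirs):
                # payoff lookup, respecting the (row, col) key order
                key = (mine, theirs) if idx == 0 else (theirs, mine)
                return u[key][idx]
            for s in list(own):
                if any(all(pay(s2, t) > pay(s, t) for t in others)
                       for s2 in own if s2 != s):
                    own.remove(s)
                    changed = True
    return rows, cols

print(idds(['T', 'B'], ['L', 'M', 'R'], U))  # (['T'], ['M'])
```

Here R is deleted first (M dominates it for Column), then B, then L, leaving the single profile (T, M).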

Equilibrium A strategy profile s* is a (Nash) equilibrium if, for each i, s*_i is a best response to s*_{-i}. The definition has two parts: 1. players best respond to beliefs; 2. beliefs are accurate.

Properties of Equilibrium 1. Nash equilibrium need not exist in pure strategies. 2. Nash equilibrium exists in mixed strategies when S is finite. 3. Nash equilibria are rationalizable and survive IDDS. 4. Hence dominance solvable implies a unique Nash equilibrium. 5. Games may have multiple Nash equilibria. 6. Nash equilibria need not be efficient. 7. Nash equilibrium payoffs are at least equal to the security level.
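For 2x2 games the mixed equilibrium can be found from the indifference conditions. A sketch for the Battle of the Sexes game above: if Row plays Up with probability p, Column is indifferent when 1·p = 4·(1-p); if Column plays Left with probability q, Row is indifferent when 4·q = 1·(1-q).

```python
from fractions import Fraction

# Battle of the Sexes: Up (4,1 | 0,0), Down (0,0 | 1,4)
p = Fraction(4, 5)   # Row's Pr(Up): solves 1*p = 4*(1-p)
q = Fraction(1, 5)   # Column's Pr(Left): solves 4*q = 1*(1-q)

# verify the indifference conditions
assert 1 * p == 4 * (1 - p)
assert 4 * q == 1 * (1 - q)

# mixed-equilibrium payoff for Row (Column's is symmetric)
u_row = 4 * p * q + 1 * (1 - p) * (1 - q)
print(u_row)  # 4/5
```

Note that 4/5 is below the payoff of 1 either player gets at their less-preferred pure equilibrium, illustrating that equilibria need not be efficient.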

Existence Proof Let φ : Π_i Δ(S_i) → Π_i Δ(S_i) be defined by φ_i(σ) = {σ'_i : σ'_i is a best response to σ_{-i}}. Check that φ satisfies the assumptions of Kakutani's Fixed Point Theorem and that a fixed point is a Nash equilibrium.

Lessons of Existence Proof Depends on Expected Utility (convex valued). Can be extended to existence of pure strategy equilibria in games with continuous strategy spaces if u i strictly quasi-concave.

Correlated Equilibrium A probability distribution p(·) ∈ Δ(S) is a correlated equilibrium if for all i and s_i with p(s_i, s_{-i}) > 0 for some s_{-i}, s_i solves max_{s'_i} Σ_{s_{-i}} u_i(s'_i, s_{-i}) p(s_{-i} | s_i).

Properties 1. Existence (includes convex hull of NE) 2. Simplicity (linear programming characterization, rather than fixed point) 3. Nice Dynamic Properties 4. Some Epistemic Foundations 5. Convenient in Repeated Games 6. Appropriate in Mechanism Design Problems 7. Justified when mediators, jointly observed signals, communication available

Standard Example
         Left   Right
Up     6, 6   2, 7
Down   7, 2   0, 0
Correlated equilibrium that induces the distribution:
         Left   Right
Up      1/3    1/3
Down    1/3     0

Comments 1. Row is willing to play Up when told because he infers that Left and Right are equally likely. 2. Outcome outside of convex hull of Nash Equilibrium payoffs. 3. NE payoffs are (7, 2), (2, 7), and (14/3, 14/3). The last comes from (2/3, 1/3) symmetric mixed equilibrium. 4. Correlation generates a better symmetric equilibrium payoff (5, 5). 5. How might players arrive at correlated outcome?
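The incentive constraints for the distribution above can be verified directly (a sketch; the function names are illustrative):

```python
from fractions import Fraction as F

# Payoffs and the candidate correlated distribution (Up-Left, Up-Right,
# Down-Left each with probability 1/3).
u = {('U','L'): (6,6), ('U','R'): (2,7), ('D','L'): (7,2), ('D','R'): (0,0)}
p = {('U','L'): F(1,3), ('U','R'): F(1,3), ('D','L'): F(1,3), ('D','R'): F(0)}

def obeys(recommended, deviation, player):
    """Conditional on the recommendation, is the deviation unprofitable?"""
    gain = F(0)
    for other in (('L', 'R') if player == 0 else ('U', 'D')):
        key = lambda mine: (mine, other) if player == 0 else (other, mine)
        gain += p[key(recommended)] * (u[key(deviation)][player]
                                       - u[key(recommended)][player])
    return gain <= 0

assert all(obeys(r, d, 0) for r, d in [('U','D'), ('D','U')])
assert all(obeys(r, d, 1) for r, d in [('L','R'), ('R','L')])

payoff = tuple(sum(p[s] * u[s][i] for s in p) for i in (0, 1))
print(payoff)  # (Fraction(5, 1), Fraction(5, 1))
```

Both players obey every recommendation, and the induced payoff (5, 5) indeed exceeds the best symmetric Nash payoff (14/3, 14/3).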

Two-Player Zero-Sum Games I = 2, u_1(s) + u_2(s) = 0 for all s ∈ S. (Example: Matching Pennies.) In general, min_{σ_{-i}} max_{σ_i} u_i(σ) ≥ max_{σ_i} min_{σ_{-i}} u_i(σ). What is the importance of mixed strategies? Interpretation of the LHS? Why is the inequality true? The inequality is an equation for zero-sum games: the Fundamental Theorem of Two-Player Zero-Sum Games.
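A sketch of the RHS for Matching Pennies (with the signs reconstructed: u_1(H,H) = u_1(T,T) = 1, u_1(H,T) = u_1(T,H) = -1), maximizing Row's worst-case payoff over mixed strategies by a grid search on p = Pr(H):

```python
def maxmin(grid=1001):
    """Row's mixed-strategy security level in Matching Pennies."""
    best = float('-inf')
    for i in range(grid):
        p = i / (grid - 1)
        vs_H = p * 1 + (1 - p) * (-1)   # payoff if opponent plays H
        vs_T = p * (-1) + (1 - p) * 1   # payoff if opponent plays T
        best = max(best, min(vs_H, vs_T))
    return best

print(maxmin())  # 0.0, achieved at p = 1/2
```

Mixing raises the security level from -1 (any pure strategy) to 0, which is also the value of the game.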

Refinements Nash removes strictly dominated strategies (and iterates). What about weakly dominated strategies?
         Left   Right
Up     4, 4   0, 0
Down   0, 0   0, 0
Can we trick Nash into removing weakly dominated strategies?

Refinements The program of refining the set of NE has the goal of making more precise predictions.

General Approach Require equilibrium to be robust to trembles. Approach: Given a game Γ, a completely mixed strategy profile σ, and ε ∈ (0, 1), form a new game Γ(σ, ε). In the new game: 1. Same player set as Γ. 2. Same strategy sets as Γ. 3. ũ_i(s) = (1 - ε)u_i(s) + ε u_i(s_i, σ_{-i}). Require a refined equilibrium to be the limit of NE of Γ(σ, ε) as ε approaches 0, for some σ (trembling-hand perfect) or for all σ (truly perfect).

Properties (Mostly Good) 1. Existence (for perfect). 2. Non-existence (for truly perfect). 3. The set-valued notion of strategic stability resolves the non-existence problem. 4. Eliminates weakly dominated strategies. 5. Eliminates incredible threats.

Properties (Mostly Bad) 1. Still leaves multiple equilibria in interesting cases (coordination, signaling, repeated games). 2. Not robust in precise sense(s). 3. Strong demands on rationality of players. 4. Often provide poor predictions. 5. Strong demands on rationality of modelers (hard to compute).

Why Bother? The Case Against. Refinements typically do not eliminate mixed equilibria in pure coordination games. Refinements do not reduce the large set of equilibria in repeated games. In generic normal-form games, pure-strategy NE are strict and satisfy standard refinements. (s is a strict Nash equilibrium if s_i is the unique best response to s_{-i} for each i.)

Why Bother? The Case For 1. Interesting dynamic games are non-generic. 2. Refinement program makes useful predictions in some settings. 3. Baseline for further analysis.

Dynamics 1. Eliminate Coordination Failure mixed equilibria. 2. Allow for boundedly rational agents. 3. Link to evolutionary models. 4. Some potential to distinguish between strict equilibria 5. Convergence problematic.

Bounded Rationality 1. Most dynamic, evolutionary arguments. 2. Quantal Response Equilibrium. 3. Level-k analysis.

Quantal-Response Equilibrium 1. Random choice, with higher probabilities on better performing strategies. 2. Logit Variation: One parameter family moving from best responding to uniform distribution over strategy space. 3. Equilibrium analysis.
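A sketch of a logit quantal response equilibrium, computed for Matching Pennies (signs reconstructed) by damped fixed-point iteration; the precision lam and the damping factor are illustrative choices, not from the lecture:

```python
import math

def logit_qre(lam=1.0, damp=0.1, iters=5000):
    """Damped iteration toward the logit QRE of Matching Pennies.

    lam -> infinity approaches best response; lam = 0 gives the uniform
    distribution over the strategy space.
    """
    p = q = 0.9     # initial Pr(H) for Row and Column
    for _ in range(iters):
        row_H, row_T = 2 * q - 1, 1 - 2 * q   # Row wants to match
        col_H, col_T = 1 - 2 * p, 2 * p - 1   # Column wants to mismatch
        p_star = math.exp(lam * row_H) / (math.exp(lam * row_H) + math.exp(lam * row_T))
        q_star = math.exp(lam * col_H) / (math.exp(lam * col_H) + math.exp(lam * col_T))
        p, q = p + damp * (p_star - p), q + damp * (q_star - q)
    return p, q

p, q = logit_qre()
print(round(p, 6), round(q, 6))  # 0.5 0.5: the QRE here is uniform
```

In this symmetric zero-sum game the QRE coincides with the Nash mixture for every lam; in asymmetric games the QRE varies with lam, which is what gives the concept its empirical content.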

Assessment 1. General. 2. Smoothes responses. 3. Requires more strategic sophistication than standard model. 4. Organizes observations in centipede games. 5. Tension between flawed optimization and perfect sophistication.

Level k Reasoning 1. Level 0 is arbitrary. 2. Level k best responds to the belief that the population is level k-1. 3. Free parameters: specification of level 0, distribution of k.
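A sketch for the standard 2/3-the-average guessing game (an illustration; the lecture mentions guessing games but does not spell this one out). Assume level 0 guesses 50; level k best responds as if everyone else is level k-1:

```python
def level_k_guess(k, level0=50.0, factor=2/3):
    """Level-k guess in a beauty-contest game with target = factor * average."""
    guess = level0
    for _ in range(k):
        guess = factor * guess   # best reply to a population guessing `guess`
    return guess

print([round(level_k_guess(k), 2) for k in range(4)])  # [50.0, 33.33, 22.22, 14.81]
```

As k grows the guesses converge to 0, the unique equilibrium, but experimental play typically clusters at low levels of k.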

Assessment 1. General, easier to compute than equilibria. 2. Computing best responses is still hard for a computer scientist. 3. Striking results in guessing games. 4. Good ex post fits. 5. Tension between stupid beliefs and smart optimization.

Other Approaches 1. Risk Dominance, Pareto Dominance (or other Harsanyi-Selten criteria). 2. New Age: I predict the equilibrium I like best. 3. Coalitional (e.g., Strong Nash, Renegotiation-Proof).

Extensive Form 1. A game is: a finite set of nodes X; actions A; players {1, ..., I}. 2. p : X → X ∪ {∅}; p(x) is the immediate predecessor of x, nonempty for all but one node: x_0, the initial node, is the exception. Successors of x: s(x) ≡ p^{-1}(x). Terminal nodes T ⊂ X have no successors. 3. α : X \ {x_0} → A is the action that leads to x. 4. Choices at x: c(x) ≡ {a ∈ A : a = α(x') for some x' ∈ s(x)}. It takes different actions to get to different nodes: x', x'' ∈ s(x), x' ≠ x'' implies α(x') ≠ α(x'').

5. A partition H of X and a function H : X → H. Associated with each node x is an information set H(x) ∈ H. c(x) = c(x') if H(x) = H(x') (otherwise one could distinguish elements of an information set). C(H) ≡ {a ∈ A : a ∈ c(x) for some x ∈ H} are the choices at H. 6. ι : H → {0, 1, ..., I} assigns a player to each information set; player 0 is nature. H_i = {H ∈ H : i = ι(H)}. 7. ρ : H_0 × A → [0, 1] such that ρ(H, a) = 0 if a ∉ C(H) and Σ_{a ∈ C(H)} ρ(H, a) = 1 for all H ∈ H_0; ρ describes nature's moves. 8. u_i : T → R, utility functions for each agent.

Features 1. Strategy: complete contingent plan (what to do at every information set). 2. Mixed strategy: probability distribution over pure strategies. 3. Outcome: probability distribution over terminal nodes. 4. Behavior strategy: independent probability distribution at each information set. 5. Can turn extensive game into strategic game. (And conversely) 6. Nature is a trick that can be used to incorporate incomplete information. 7. Perfect information game: all information sets are singletons. 8. Perfect Recall: formalization of no player forgets.

Two Classical Results 1. Perfect information games have pure strategy equilibria. 2. In games with perfect recall, any outcome induced by mixed strategies can be induced by behavior strategies.

Subgames Given an extensive form game, a subgame is a subset of the nodes for which the restriction of the predecessor function is still well defined; there is a unique initial node; and if x is in the subgame, so are all nodes in H(x). (Start with an arbitrary singleton information set and include all successors. If you can do this and you hit all nodes in all successor information sets, then you have a subgame.) Subform: same as subgame, except that you can start with a non-trivial information set. (If so, then add an initial move of nature.)

Belief-based Refinements for Extensive Games Many variations, intuitively appealing and sometimes more tractable than trembles.

Information 1. Extensive Form: moves by nature; information sets. 2. Normal Form: type space Θ = Π_i Θ_i; prior on Θ; Agent i learns θ_i ∈ Θ_i; strategies are type-contingent actions. Examples coming. Game theory handles incomplete information by reinterpreting the general formulation.

Signaling 1. Two players, S (sender) and R (receiver). 2. Nature picks t, the type of the sender; p(t) is the probability that the type is t. 3. Sender observes t, selects a signal s. 4. Receiver observes s (but not t), selects an action a. 5. U_i(a, t, s) payoff functions. Standard application: S is a worker, t is ability, s is education, R is a proxy for a market that sets the wage; a is the wage. Possible preferences: U_R(a, t, s) = -(a - t)^2 (market pays a wage equal to expected ability); U_S(a, t, s) = a - αs^2/t (workers like higher wages and lower signals; the marginal cost of producing the signal decreases with ability).

Spence Model Is it possible for the signal to convey information to the receiver? Answer: maybe not. Suppose that s(t) ≡ s*. The best response for the Receiver includes a(s*) = arg max E U_R(a, t, s*) (the prior-optimal action). If one can find a(s) for s ≠ s* such that U_S(a(s*), t, s*) ≥ U_S(a(s), t, s) for all t and s ≠ s*, then there is a pooling equilibrium outcome. Finding such an a(·) is not hard in leading examples (in the labor market, let a(s) ≡ 0 for s ≠ s*, so that in a putative pooling equilibrium agents get the average wage for the signal s* and zero otherwise).

Definition of Equilibrium 1. Sender strategy: σ(t), mapping type to signal. 2. Receiver strategy: α(s), mapping signal to action. 3. Receiver belief: µ(t | s), updating beliefs given the signal. (σ*, α*, µ*) is a (weak perfect Bayesian) equilibrium if: 1. σ*(t) solves max_s U_S(α*(s), t, s) for all t. 2. α*(s) solves max_a ∫ U_R(a, t, s) dµ*(t | s) for all s. 3. µ* derives from the prior and σ* using Bayes' Rule (whenever possible).

Single-Crossing Condition If t' > t and s' > s, then U_S(a', t, s') = U_S(a, t, s) implies that U_S(a', t', s') > U_S(a, t', s). This is the fundamental sorting condition that arises in many applications of information economics. Geometrically, it states that indifference curves in (a, s) space of different types cross at most once. Mathematically, it can be thought of as a supermodularity assumption on utility. Economically, it says that higher types are more willing to use higher signals.

When Does Single-Crossing Fail? One Example: Cheap Talk (U S ( ) independent of s).

Separating Equilibrium Let BR(s, t) be the Receiver's best response to the signal s given type t: BR(s, t) solves max_a U_R(a, t, s). 1. The lowest type signals as under complete information: type 0 solves max_s U_S(BR(s, 0), 0, s); call the solution s*(0). 2. The next type does as well as possible, constrained so that the lower type does not want to imitate. That is, given s*(t), s*(t+1) is the solution to max_s U_S(BR(s, t+1), t+1, s) subject to U_S(BR(s*(t), t), t, s*(t)) ≥ U_S(BR(s, t+1), t, s).
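A minimal sketch of this construction in the labor-market application, assuming two types t ∈ {1, 2}, α = 1 (so U_S(a, t, s) = a - s²/t), and a wage equal to the inferred type, BR(s, t) = t (these parameter choices are illustrative):

```python
def u_s(a, t, s):
    """Sender's utility: wage minus signaling cost, assuming alpha = 1."""
    return a - s ** 2 / t

# Step 1: the low type (t = 1) signals as under complete information.
s_low = 0.0                          # maximizes 1 - s**2 / 1

# Step 2: the high type (t = 2) picks its best signal subject to the low
# type not wanting to imitate: u_s(1, 1, s_low) >= u_s(2, 1, s).
grid = [i / 1000 for i in range(3001)]
feasible = [s for s in grid if u_s(1, 1, s_low) >= u_s(2, 1, s)]
s_high = max(feasible, key=lambda s: u_s(2, 2, s))

print(s_high)  # 1.0: the smallest signal the low type refuses to copy
```

The binding constraint (1 = 2 - s²) pins down s*(2) = 1: the high type over-invests in the signal exactly enough to deter imitation, illustrating the excessive-investment property below.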

Details 1. Can one really solve the problems in the construction? 2. I described a path of play. How do you specify complete strategies? 3. Given the strategies, are they really equilibrium strategies? The answer to (1) is, in general, no (even with single crossing). You might get a boundary solution; that is, you might have all high types sending the highest message. This can be ruled out with a boundary condition. There are many answers to (2), but the most sensible is for R to assume that signals s ∈ (s*(t-1), s*(t)) come from t-1, and so a*(s) = BR(s, t-1). This answer raises two questions: why is s*(t-1) < s*(t)? Why does this support the equilibrium? In both cases the answer is single crossing. The Receiver best responds by definition. All we know about the sender is that the sender of type t-1 is indifferent between sending her message (s*(t-1)) and the message of type t. With single crossing, this is enough.

Properties of Separating Equilibria Multiplicity. Pareto-ranked from the viewpoint of the Sender. Good from the viewpoint of the Receiver. Sensitive to the support of the prior. Excessive investment in the signal. Reflect on the value of information in this setting.

Refinements Select Pareto dominating separating equilibrium

My presentation did not include anything that follows

Simple Cheap Talk S learns θ, selects a message m ∈ M, and R selects an action a ∈ A. Strategies: α(a | m) for R, σ(m | θ) for S. Beliefs: µ(θ | m). As in Spence, but preferences do not depend directly on m.

SIMPLE MODEL 1. One-dimensional. 2. A = R, Θ = [0, 1]. 3. Quadratic preferences, uniform bias: U_S(a, θ) = -(a - θ - b)^2, U_R(a, θ) = -(a - θ)^2. 4. Add a uniform prior to the quadratic preferences to get the linear-quadratic model.

Linear-Quadratic Special Case 1. Closed-form solutions. 2. The mean is a sufficient statistic for R. 3. Ex ante interests aligned: EU_S = -E(y - θ - b)^2 = -E(y - θ)^2 + 2bE(y - θ) - b^2 = EU_R - b^2, since E(y - θ) = 0 when R best responds.

STRATEGIC ASPECTS 1. Babbling Outcome Always Exists. 2. Communication Requires Some Common Interests.

TAXONOMY Communication can change beliefs, actions, and payoffs. 1. Informative communication changes beliefs: µ*(· | m) is not constant on the equilibrium path. 2. Influential communication changes actions: α*(m) is not constant on the equilibrium path. 3. Payoff-relevant communication changes R's payoff: E[U_R(α*(σ*), θ)] > max_{a ∈ A} E[U_R(a, θ)].

THREE REASONS FOR MULTIPLE EQUILIBRIA 1. Multiple responses off-the-path 2. Multiple meanings of words 3. Multiple equilibrium type-action distributions

I: OFF THE PATH Familiar indeterminacy. Consistent with unique equilibrium outcomes.

II: MEANINGS INDETERMINATE If there is an equilibrium in which you enter my office when I say come in and leave when I say goodbye, there is also an equilibrium in which you enter my office when I say goodbye and leave when I say come in. The two equilibria are equivalent in that they preserve the same relationship between types and action (and payoffs). This multiplicity is both good and bad.

ENDOGENOUS MEANING 1. Elegant abstraction from natural language. 2. Emphasizes that coordination failure is possible. 3. Natural language does mean something. The first two comments are strengths, the third a weakness.

III: Multiple Equilibrium Type-Action Distributions Common-interest games have both babbling and fully revealing equilibria. To make predictions you must resolve Indeterminacy III.

IMPROVING SENDER'S INFORMATION HURTS Compare: θ uniform on [0, 1], U_S(a, θ) = -(a - θ - .15)^2, U_R(a, θ) = -(a - θ)^2. If S knows θ, the most informative equilibrium reports only whether θ < .2. If S knows only whether θ > .5, there is an equilibrium in which S reports truthfully.
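The two-step equilibrium for b = .15 can be checked numerically (a sketch, assuming the standard partition with boundary .2, actions at the cell means, and a midpoint grid to approximate the uniform prior):

```python
b, cut = 0.15, 0.2
a_lo, a_hi = cut / 2, (cut + 1) / 2          # actions 0.1 and 0.6

# The cutoff type theta = 0.2 must be indifferent between the two actions:
assert abs((a_lo - cut - b) ** 2 - (a_hi - cut - b) ** 2) < 1e-12

# Ex ante, sender and receiver payoffs differ by exactly b**2:
n = 20000
thetas = [(i + 0.5) / n for i in range(n)]   # midpoint grid on [0, 1]
def action(th): return a_lo if th < cut else a_hi
eu_r = -sum((action(t) - t) ** 2 for t in thetas) / n
eu_s = -sum((action(t) - t - b) ** 2 for t in thetas) / n
print(round(eu_r - eu_s, 9))  # 0.0225 = b**2
```

The indifference check confirms the boundary .2 (type .2 is exactly between the two induced actions plus her bias), and the payoff gap matches the EU_S = EU_R - b² identity from the linear-quadratic slide.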

MANY SENDERS Obtaining advice from different experts with similar information. Applications: 1. Arbitration 2. Legislative Decision Making

FOCUS ON FULL REVELATION More information transmission is possible with multiple Senders. Full revelation is possible with: 1. a large enough (one-dimensional) domain; 2. close preferences; 3. an unbounded domain and multiple Senders. Otherwise there are equilibria that approximate full revelation.

Are Fully Revealing Equilibria Good Predictions? Wife wants to dine out and avoid enemy. Prefers to stay home than dine out not knowing enemy s location. Husband wants to stay at home. Prefers to dine without enemy than with enemy. Revealing equilibrium (good for wife). Pooling equilibrium (good for husband). Add more informed agents if you wish.

INSTITUTION DESIGN Abandon simple strategic form, study R optimal arrangements under different assumptions. Most analysis for one-dimensional, linear-quadratic example.

ARBITRATION A disinterested third party asks S for information and is able to commit to a decision rule. How to maximize R's utility? Delegation with a constraint: the Sender can take her favorite action as long as it is not too large. Better than cheap talk. No better than babbling for sufficiently high biases.

MEDIATION Third-party can collect information, but the Receiver retains decision-making authority and is constrained to make a best response given his information. 1. Generically better than direct cheap-talk (when mediation is informative) 2. When the largest partition has cardinality k, mediation gets you k.5 steps 3. Can be implemented with noisy channels or multiple rounds

NEGOTIATION Extended cheap talk. Logically (weakly) not as good as mediation, but performs as well for intermediate biases.

PERSUASION The Sender can commit. The Sender selects q(m | θ), the probability that she sends message m when the state is θ, which generates beliefs µ(θ | m) = p(θ)q(m | θ) / Σ_{θ'} p(θ')q(m | θ') and message probabilities P(m) = Σ_θ q(m | θ)p(θ). The Receiver best responds to µ: a_R(µ). The maximum expected payoff for the Sender when she induces the beliefs µ is Û_S(µ) = Σ_{θ ∈ Θ} U_S(a_R(µ), θ)µ(θ).

The only constraint is Bayesian plausibility: Σ_{m ∈ M} µ(θ | m)P(m) = p(θ). (1) So the Sender solves: max Σ_m Û_S(µ(· | m))P(m) subject to (1).
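A sketch using the classic prosecutor example from the persuasion literature (an assumption, not in the transcript): θ ∈ {guilty, innocent} with prior p(guilty) = 3/10, and the Receiver convicts iff µ(guilty | m) ≥ 1/2. The Sender commits to q(convict | guilty) = 1 and q(convict | innocent) = 3/7:

```python
from fractions import Fraction as F

prior = {'g': F(3, 10), 'i': F(7, 10)}
q = {('c', 'g'): F(1), ('c', 'i'): F(3, 7),     # signal "convict"
     ('a', 'g'): F(0), ('a', 'i'): F(4, 7)}     # signal "acquit"

# Message probabilities and posteriors, exactly as in the slide's formulas.
P = {m: sum(q[m, th] * prior[th] for th in prior) for m in ('c', 'a')}
mu = {(th, m): q[m, th] * prior[th] / P[m] for th in prior for m in ('c', 'a')}

# Bayesian plausibility (1): posteriors average back to the prior.
for th in prior:
    assert sum(mu[th, m] * P[m] for m in ('c', 'a')) == prior[th]

print(mu['g', 'c'], P['c'])  # 1/2 3/5
```

The posterior after "convict" is exactly 1/2, just enough for conviction, so the Receiver convicts with probability 3/5 even though only 3/10 of defendants are guilty: commitment lets the Sender gain.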

VERIFIABLE INFORMATION M consists of all subsets of Θ. Type θ ∈ Θ can send only messages m with θ ∈ m.

CENTRAL RESULT 1. Unraveling from the top in monotone models. 2. Skeptical strategy for the Receiver: assume the message is the worst available for S.
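The unraveling logic can be sketched with a toy monotone model (an illustration; the types and payoffs are assumptions): types 1..5 can either prove their type or stay silent, a silent pool is valued at its mean, and any type above the pool's mean prefers to prove itself.

```python
# Iterated unraveling: the silent pool shrinks from the top.
pool = {1, 2, 3, 4, 5}
while True:
    mean = sum(pool) / len(pool)
    leavers = {t for t in pool if t > mean}   # above-average types reveal
    if not leavers:
        break
    pool -= leavers

print(sorted(pool))  # [1]
```

The pool collapses to the bottom type, so the skeptical inference "silence means the worst type" is self-fulfilling: everyone with something good to prove proves it.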

Conclusion None.