Evaluating Strategic Forecasters. Rahul Deb with Mallesh Pai (Rice) and Maher Said (NYU Stern) Becker Friedman Theory Conference III July 22, 2017

Motivation Forecasters are sought after in a variety of different fields: sports, politics, economics, banking, finance, etc. They differ in their ability (Tetlock 2005, Tetlock and Gardner 2016). Moreover, they forecast strategically to enhance the perception of their ability (Trueman 1994, Hong and Kubik 2003, Ottaviani and Sørensen 2006). Suppose someone is interested in hiring a forecaster. How should she evaluate a strategic forecaster based on the history of his predictions and the realized outcomes?

A topical motivating example (circa 2016) Outcome in question: Will Donald Trump Make America Great Again™? Unknown state: The underlying policy preferences of the electorate. Agent: A pollster who makes multiple predictions about who will win the election. The pollster can be either good or bad. Signals: The pollster observes noisy information about the underlying state. A good pollster receives more accurate signals than a bad one. Principal: An organization (e.g., an exploratory committee) deciding whether or not to retain the pollster for a future election cycle. Mechanism: The principal's rule determining when to hire. Preferences: The principal only wants to hire a good pollster, and the agent always wants to be hired.

Simplified symmetric model Players: Single principal and single forecaster. Horizon: Forecaster observes information over T periods. An event is publicly realized in period T + 1. Principal then decides whether to hire the agent. State: Persistent binary state of the world ω ∈ {H, L}. Each state is equally likely. Types: Agent's private type θ ∈ {g, b}. Both types are equally likely.

Simplified symmetric model Signals: At each t ≤ T, the agent privately observes a signal s_t ∈ {h, l}. Each signal s_t matches the state ω with probability α_θ. The good type is better informed: 1/2 < α_b < α_g < 1. Outcome: Binary outcome r ∈ {h, l}, publicly realized in period T + 1. The outcome r matches the state ω with probability γ, where 1/2 < γ ≤ 1. Payoffs: The principal gets +1 if she hires type g, −1 if she hires type b, and 0 otherwise. The agent gets 1 if hired and 0 otherwise.
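
For concreteness, here is a minimal simulation sketch of this data-generating process in Python; the parameter values (α_g = 0.8, α_b = 0.6, γ = 0.7) are illustrative assumptions, not numbers from the talk.

```python
import random

def simulate(T=3, alpha_g=0.8, alpha_b=0.6, gamma=0.7, seed=None):
    """One play of the environment: state, type, T private signals, public outcome."""
    rng = random.Random(seed)
    omega = rng.choice(["H", "L"])                     # persistent state, equally likely
    theta = rng.choice(["g", "b"])                     # forecaster's type, equally likely
    alpha = alpha_g if theta == "g" else alpha_b
    state_signal = "h" if omega == "H" else "l"
    flip = lambda s: "l" if s == "h" else "h"
    # each private signal matches the state with probability alpha_theta
    signals = [state_signal if rng.random() < alpha else flip(state_signal) for _ in range(T)]
    # the public outcome matches the state with probability gamma
    r = state_signal if rng.random() < gamma else flip(state_signal)
    return omega, theta, signals, r

print(simulate(seed=1))
```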

The game At each t, the agent sends a message s̃_t ∈ {h, l}. Agent histories: H^A = ∪_{t=1}^{T} ({h, l}^t × {h, l}^{t−1}) denotes the set of agent histories. A period-t history is h^A_t = (s^t, s̃^{t−1}). Principal histories: H^P = {h, l}^{T+1} denotes the set of relevant public histories. A typical element is h^P = (s̃^T, r). Agent's strategy: σ_θ : H^A → Δ{h, l} determines the distribution of messages at each history. Principal's strategy: x_r(s̃^T) ∈ {0, 1} is the decision to hire the agent (or not). Our main focus is on deterministic mechanisms. We will consider both when the principal can and cannot commit to x.

Overview of the game Nature draws unobserved state ω ∈ {H, L}; Agent learns type θ ∈ {g, b}; Agent observes private signal s_1 ∈ {h, l} and reports s̃_1 ∈ {h, l};... Agent observes private signal s_T ∈ {h, l} and reports s̃_T ∈ {h, l}; Outcome r ∈ {h, l} publicly realized; Principal makes hiring decision x_r(s̃^T); Payoffs are realized.

Main questions of interest Is screening possible with such limited instruments? If so, what is the (full commitment) optimal mechanism? How does the optimal mechanism use the sequence of reports? What role does commitment play?

Preview of results With a nonstrategic forecaster, the principal would employ a test that uses both accuracy and consistency. But with a strategic forecaster, rewarding consistency is not IC. Instead, the principal optimally uses accuracy and speed as a screening device. The optimal mechanism takes a very simple form, and can also be implemented without commitment. Randomization can strictly raise the principal s payoff.

Related Literature Testing experts: Foster-Vohra (1998), Olszewski (2015). Adaptive testing: Deb-Stewart (2016). Forecasters: Ottaviani-Sørensen (2006a, 2006b, 2006c), Marinovic-Ottaviani-Sørensen (2013). Empirical: Trueman (1994), Hong-Kubik (2003), Lahiri-Sheng (2008). Statistics: Elliott-Timmerman (2016). Dynamic mechanism design: Battaglini (2005), Boleslavsky-Said (2013).... without money: Guo-Hörner (2015).

Benchmark: Public signals Consider the ("first-best") benchmark in which the agent's signals are public. The principal's ex-ante expected payoff from any hiring rule x is Π = Expected probability of hiring g − Expected probability of hiring b = Σ_{r ∈ {h,l}} Σ_{s^T ∈ {h,l}^T} [ (1/2) Pr(r, s^T | θ = g) − (1/2) Pr(r, s^T | θ = b) ] x_r(s^T). The optimal hiring decision is essentially a likelihood ratio test: x_r^FB(s^T) = 1 if Pr(r, s^T | θ = g) ≥ Pr(r, s^T | θ = b), and 0 otherwise.
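
The payoff and the likelihood-ratio rule can be computed by enumerating all (r, s^T) pairs. A sketch with illustrative parameter values (α_g = 0.8, α_b = 0.6, γ = 0.7 are assumptions for the example):

```python
from itertools import product

def pr_history(r, sT, alpha, gamma):
    """Pr(r, s^T | type with signal accuracy alpha), averaging over the two equally likely states."""
    total = 0.0
    for omega in ("h", "l"):                    # identify state H with 'h' and L with 'l'
        p = 0.5
        for s in sT:
            p *= alpha if s == omega else 1 - alpha
        p *= gamma if r == omega else 1 - gamma
        total += p
    return total

def first_best(T=3, alpha_g=0.8, alpha_b=0.6, gamma=0.7):
    """Likelihood-ratio hiring rule x^FB and the principal's payoff Pi with public signals."""
    rule, payoff = {}, 0.0
    for r in ("h", "l"):
        for sT in product("hl", repeat=T):
            diff = 0.5 * pr_history(r, sT, alpha_g, gamma) - 0.5 * pr_history(r, sT, alpha_b, gamma)
            rule[(r, sT)] = 1 if diff >= 0 else 0
            payoff += diff * rule[(r, sT)]
    return rule, payoff

rule, payoff = first_best()
print(rule[("h", ("h", "h", "h"))], round(payoff, 4))   # three correct signals -> hired
```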

Benchmark: Public signals Only the composition of signals (and not their order) matters. Randomization does not help. It is possible to screen even when the outcome is uninformative (i.e., γ = 1/2). Screening is possible by using the variance of the observed signals. Since type g receives more precise information, he is more likely to have profiles with a large proportion of identical signals. An informative outcome (γ > 1/2) provides an additional instrument for screening as g is also more likely to match the outcome.

Benchmark: Public signals Define 1_r(s_t) := 1 if s_t = r, and 0 if s_t ≠ r. Theorem In the public signal benchmark, the optimal hiring policy x^FB is characterized by two cutoffs n̄ and n̲ with T/2 < n̄ ≤ T and n̲ ≤ T − n̄ such that x_r^FB(s^T) = 1 if Σ_t 1_r(s_t) ≥ n̄, 1 if Σ_t 1_r(s_t) ≤ n̲, and 0 otherwise. The principal screens using both the accuracy and consistency of the signals. A correct set of signals is neither necessary nor sufficient. Signals must either be consistently accurate or (only when γ < 1) consistently inaccurate. If inaccurate, the consistency threshold is harder to meet.
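
The cutoff structure can be checked numerically by grouping the first-best decision by the number of signals that match the outcome; the sketch below does so for T = 5 with the same illustrative parameters as above.

```python
from itertools import product

def pr_history(r, sT, alpha, gamma):
    """Pr(r, s^T | type with signal accuracy alpha), averaging over the two equally likely states."""
    total = 0.0
    for omega in ("h", "l"):
        p = 0.5
        for s in sT:
            p *= alpha if s == omega else 1 - alpha
        p *= gamma if r == omega else 1 - gamma
        total += p
    return total

def hiring_by_match_count(T=5, alpha_g=0.8, alpha_b=0.6, gamma=0.7):
    """Group the first-best hiring decision by the number of signals matching r."""
    by_count = {}
    for r in ("h", "l"):
        for sT in product("hl", repeat=T):
            hire = pr_history(r, sT, alpha_g, gamma) >= pr_history(r, sT, alpha_b, gamma)
            matches = sum(s == r for s in sT)
            by_count.setdefault(matches, set()).add(hire)
    return by_count

# the decision depends on s^T only through the match count, and the hired counts
# form an upper region (>= n_bar) plus possibly a lower region (<= n_lower)
print(sorted((k, sorted(v)) for k, v in hiring_by_match_count().items()))
```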

Is this benchmark achievable? Now suppose signals are privately observed by the agent. Does the agent have an incentive to report truthfully if the principal commits to x^FB? Consider the corner case where type g sees almost perfectly informative signals, i.e., α_g = 1 − ε. Then n̄ = T and n̲ = 0, implying the agent is never hired after any inconsistent reports. The agent (of either type) will never truthfully report differing signals!

A simple class of mechanisms The principal can screen using a period-t prediction mechanism: The principal picks a period t and asks the agent to predict the eventual outcome. The agent is hired if, and only if, that prediction matches the outcome. Formally: x_r(s̃^T) = 1 if s̃_t = r, and 0 otherwise. Since type g is more likely to make a correct prediction, prediction mechanisms can achieve nontrivial separation.
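
A minimal sketch of this hiring rule (the function name and the 'h'/'l' encoding of reports are choices made for the example):

```python
def period_t_prediction_mechanism(t):
    """Hiring rule of a period-t prediction mechanism: hire (1) iff the period-t
    report matches the realized outcome; every other report is ignored."""
    def x(r, reports):
        # r and each entry of reports is 'h' or 'l'
        return 1 if reports[t - 1] == r else 0
    return x

x = period_t_prediction_mechanism(2)
print(x("h", ["l", "h", "l"]))   # 1: the period-2 report 'h' matches the outcome 'h'
```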

Optimal prediction mechanisms Theorem There exists a T* > 1 such that the principal's payoff from a period-t prediction mechanism is increasing in t for all t ≤ T* and decreasing in t for all t ≥ T*. Intuition for the nonmonotonicity in t: Larger t implies more precise learning and better predictions. But as t → ∞, both types learn the state perfectly. This tradeoff balances out at T*.
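
A numerical sketch of this hump shape, with illustrative parameter values (α_g = 0.8, α_b = 0.6, γ = 0.7) and the agent predicting the majority signal observed so far, which is his optimal prediction in this symmetric model:

```python
from itertools import product

def prediction_payoff(t, alpha_g=0.8, alpha_b=0.6, gamma=0.7):
    """Principal's payoff (1/2)[Pr(correct | g) - Pr(correct | b)] from a period-t
    prediction mechanism, with the agent predicting the majority signal seen so far."""
    def pr_correct(alpha):
        total = 0.0
        for omega in ("h", "l"):
            for signals in product("hl", repeat=t):
                p = 0.5                                   # prior on the state
                for s in signals:
                    p *= alpha if s == omega else 1 - alpha
                prediction = "h" if 2 * signals.count("h") >= t else "l"
                total += p * (gamma if prediction == omega else 1 - gamma)
        return total
    return 0.5 * (pr_correct(alpha_g) - pr_correct(alpha_b))

# the payoff rises in t at first, then falls once both types have learned the state
for t in range(1, 9):
    print(t, round(prediction_payoff(t), 4))
```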

Can we do better? Note that (even with commitment) the mechanisms we consider are not direct mechanisms: x does not depend on θ. A reasonable fit for applications, as menus are not observed in practice. In our setting, a direct mechanism is given by χ_r(θ, s̃^T) ∈ {0, 1}. The revelation principle applies, so the principal can do no better than the payoff she gets from an optimal (incentive compatible) direct mechanism.

Restricting attention to x is without loss Lemma There is an optimal indirect mechanism that does not depend on the reported type. Specifically, for any incentive compatible direct mechanism χ_r(θ, s̃^T), there is an indirect mechanism x_r(s̃^T) such that: 1. the principal's payoff from x is (weakly) higher than from χ; 2. the type-g agent has an incentive to report his signals truthfully; and 3. the type-b agent is free to misreport optimally. Proof: χ is a menu with one option for type g and another for type b. IC ⇒ b prefers his menu option, so removing it decreases b's payoff. But the principal's payoff Π is essentially Pr(hire | θ = g) − Pr(hire | θ = b).

The optimal full-commitment mechanism Theorem Let T̂ := min{T*, T}. A period-T̂ prediction mechanism is optimal. The optimal mechanism has the following (nice?) properties: It is a prediction mechanism. It is easy to implement in practice. Truthtelling is optimal for both types of the agent. Optimal screening relies on the rate of learning, not the consistency of reporting.

Proof: Characterizing IC Lemma A mechanism x induces truthful signal reporting from the type-g agent if, and only if, it is one of the following mechanisms: 1. a trivial mechanism: the principal's hiring decision does not depend on reports (so x_r(s̃^T) = x_r(ŝ^T) for all s̃^T, ŝ^T); or 2. a period-t prediction mechanism for some 1 ≤ t ≤ T. Consequently, a mechanism that induces truthful reporting from type g also induces truthful reporting from type b. A consequence of this lemma is that we can restrict attention to prediction mechanisms, and the theorem follows.

Characterizing IC: Intuition Note first that any nontrivial mechanism must have x_h(s̃^T) ≠ x_l(s̃^T) for all histories s̃^T. Suppose instead that, for some s̃^T, x_h(s̃^T) = x_l(s̃^T) = 1. Then IC implies the agent is always hired and the mechanism is trivial. A symmetric argument applies if both equal 0.

Characterizing IC: Intuition So suppose there is a history ŝ^{T−1} where the hiring decision is a nontrivial function of the final report s̃_T: (x_h(ŝ^{T−1}, h), x_l(ŝ^{T−1}, h)) ≠ (x_h(ŝ^{T−1}, l), x_l(ŝ^{T−1}, l)), where the left-hand side is the lottery faced after reporting h and the right-hand side is the lottery faced after reporting l. IC implies that the agent must believe r = h is more likely whenever the lottery he faces after reporting s̃_T is (x_h(ŝ^{T−1}, s̃_T), x_l(ŝ^{T−1}, s̃_T)) = (1, 0). If not, he could report s̃'_T ≠ s̃_T and obtain the lottery (x_h(ŝ^{T−1}, s̃'_T), x_l(ŝ^{T−1}, s̃'_T)) = (0, 1). Hence, x is effectively a prediction mechanism at history ŝ^{T−1}. So then x must be a prediction mechanism at all other period-T histories.

No commitment In the optimal mechanism, the principal commits to ignoring information arriving after T̂. What if the principal cannot commit and must make a sequentially rational hiring decision after the outcome is revealed? This is a dynamic cheap talk game with many equilibria. We will focus on the principal-optimal equilibrium.

No commitment: Principal-optimal equilibrium Theorem For any t, there is a sequential equilibrium of the game without commitment that yields the principal the same payoff as a period-t prediction mechanism. In particular, the principal can achieve the same payoff as in the full-commitment optimal mechanism. Proof: Consider the following strategies: The principal ignores all reports except that at period t. She hires the agent if the report matches the outcome. The agent babbles at all periods except t. At t, he recommends the outcome he thinks is more likely. Both strategies are best responses to each other and all reports are on-path.

Stochastic mechanisms A stochastic mechanism is given by x_r(s̃^T) ∈ [0, 1]. Truthful reporting imposes much less structure when the principal can employ stochastic mechanisms. We show the role of randomization by characterizing the optimal contract for the special case T = 3. A complete characterization for arbitrary T > 3 is hard.

Public signals benchmark, T = 3 Lemma Suppose T = 3. The optimal contract in the benchmark problem with public signals is one of the following: 1. hire the agent if, and only if, all 3 signals are accurate (n̄ = 3, n̲ = −1); 2. hire the agent if, and only if, all 3 signals are consistent (n̄ = 3, n̲ = 0); or 3. hire the agent if, and only if, at least 2 of 3 signals are correct (n̄ = 2, n̲ = −1). When parameters correspond to case (3), this solution is implementable by a period-3 prediction mechanism. But if parameters correspond to case (1) or case (2), then this solution cannot be implemented: consistency is too easily mimicked.

Optimal stochastic mechanism, T = 3 Define Δ_2 := Pr(Σ_t 1_r(s_t) = 2 | θ = g) − Pr(Σ_t 1_r(s_t) = 2 | θ = b). Theorem Suppose T = 3. When Δ_2 ≥ 0, the period-3 prediction mechanism is an optimal stochastic mechanism. Conversely, when Δ_2 < 0, the following stochastic mechanism is optimal: x_r(s_1, s_2, s_3) = 1 if s_1 = s_2 = r; = 1/(2(γα_b + (1 − γ)(1 − α_b))) if s_1 ≠ s_2 and s_3 = r; and = 0 otherwise. This mechanism is IC for both types of the agent. The optimal mechanism when Δ_2 < 0 allows the agent to choose the timing of his prediction, but imposes a stochastic penalty for delay. This stochastic mechanism creates additional separation by further leveraging the speed of learning. But this stochastic mechanism is not implementable without commitment.
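
Δ_2 has a simple closed form obtained by conditioning on the state; the sketch below evaluates it at two illustrative parameterizations (the specific values are my own, chosen to show that either sign can occur).

```python
def delta_2(alpha_g, alpha_b, gamma):
    """Delta_2 = Pr(exactly 2 of 3 signals match r | g) - Pr(exactly 2 of 3 signals match r | b).
    Conditioning on the state: r = omega w.p. gamma (each signal then matches r w.p. alpha),
    and r != omega w.p. 1 - gamma (each signal then matches r w.p. 1 - alpha)."""
    def p2(alpha):
        return 3 * gamma * alpha**2 * (1 - alpha) + 3 * (1 - gamma) * alpha * (1 - alpha)**2
    return p2(alpha_g) - p2(alpha_b)

print(round(delta_2(0.65, 0.55, 0.9), 4))   # positive: the period-3 prediction mechanism is optimal
print(round(delta_2(0.95, 0.55, 0.7), 4))   # negative: the stochastic mechanism above is optimal
```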

Deterministic vs. stochastic optima, T = 3 The optimal (deterministic) mechanism x̂ when Δ_2 < 0 is a period-2 prediction mechanism. In x̂, the agent is hired after conflicting signals s_1 ≠ s_2 with probability 1/2. But Δ_2 < 0 implies that the principal wants to decrease this probability. Randomization helps: compare the hiring weights at the conflicting-signal histories with s_3 = r, namely x̂_h(h,l,h) + x̂_h(l,h,h) + x̂_l(l,h,l) + x̂_l(h,l,l) = 4 · (1/2), which is proportional to Pr(hiring after s_1 ≠ s_2 | x̂), with x*_h(h,l,h) + x*_h(l,h,h) + x*_l(l,h,l) + x*_l(h,l,l) = 4 · 1/(2(γα_b + (1 − γ)(1 − α_b))), which is proportional to Pr(hiring after s_1 ≠ s_2 | x*). The comparison uses the fact that γα_b + (1 − γ)(1 − α_b) = Pr(s_3 = r | s_1 ≠ s_2, θ = b) > 1/2.
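
To check that randomization indeed helps, here is a numerical sketch (with illustrative parameters, not the paper's): it computes the principal's payoff (1/2)[Pr(hire | g) − Pr(hire | b)] under the period-2 prediction mechanism x̂ (taking the period-2 report to be the agent's second signal, an optimal prediction here) and under the stochastic mechanism x* above.

```python
from itertools import product

def hire_prob(mechanism, alpha, alpha_b, gamma):
    """Pr(hire | type with signal accuracy alpha) under a rule mechanism(r, s1, s2, s3) -> prob,
    assuming the reports equal the signals (truthful/optimal in the cases below)."""
    total = 0.0
    for omega in ("h", "l"):
        for r in ("h", "l"):
            p_r = gamma if r == omega else 1 - gamma
            for s in product("hl", repeat=3):
                p_s = 1.0
                for sig in s:
                    p_s *= alpha if sig == omega else 1 - alpha
                total += 0.5 * p_r * p_s * mechanism(r, *s, alpha_b=alpha_b, gamma=gamma)
    return total

def x_hat(r, s1, s2, s3, alpha_b, gamma):
    """Period-2 prediction mechanism: hire iff the period-2 report matches r."""
    return 1.0 if s2 == r else 0.0

def x_star(r, s1, s2, s3, alpha_b, gamma):
    """Stochastic mechanism from the previous slide: sure hiring after an early correct and
    consistent prediction, a discounted chance after a late (period-3) correct prediction."""
    if s1 == s2 == r:
        return 1.0
    if s1 != s2 and s3 == r:
        return 1.0 / (2 * (gamma * alpha_b + (1 - gamma) * (1 - alpha_b)))
    return 0.0

alpha_g, alpha_b, gamma = 0.95, 0.55, 0.7        # an illustrative Delta_2 < 0 parameterization
for name, mech in [("period-2 prediction", x_hat), ("stochastic", x_star)]:
    pg = hire_prob(mech, alpha_g, alpha_b, gamma)
    pb = hire_prob(mech, alpha_b, alpha_b, gamma)
    print(name, round(0.5 * (pg - pb), 4))       # the stochastic mechanism's payoff is higher here
```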

High-level proof approach, T = 3 Proof overview: The optimal mechanism is derived as follows: Consider mechanisms in which type g reports truthfully and type b optimally misreports. Define a relaxed problem where type b is only allowed to misreport at certain histories and where we ignore the IC constraints of type g. Argue that the solution to the relaxed problem cannot feature misreporting at these histories. Define an auxiliary problem which forces type b to report truthfully at these histories. The optimal payoff in this problem is the same as in the relaxed problem. Show that the solution to this auxiliary problem is implementable in the original problem.

When is randomization optimal? Theorem The principal's payoff from the optimal stochastic mechanism is strictly higher than that from the optimal deterministic mechanism when T > T* + 1. The principal strictly benefits from randomization when the time horizon is sufficiently long. Recall that the optimal (deterministic) mechanism for T > T* is a period-T* prediction mechanism. As in the T = 3 case, the principal benefits by lowering the probability of hiring at histories where the same number of h and l signals have been reported by period T*.

Generalizing the model Two critical assumptions in the model: 1. The agent cares only about being hired. 2. There is a binary outcome. Prediction mechanisms are still optimal if we allow: asymmetric priors about types and states; richer payoffs for the principal (e.g., state- or type-dependent); the principal uses information to take an additional (non-hiring) action; more types and states; and general signal structures. As long as outcomes are binary, IC continues to impose strong restrictions. Additionally, the optimal mechanism does not require commitment. But the optimal stochastic mechanism becomes even harder to characterize (even for T = 3).

Concluding remarks We study the mechanism design problem of evaluating professional forecasters. We show that optimal mechanisms are simple and can be implemented without commitment. Part of a broader agenda of studying dynamic strategic learning environments without money.

Thank you!