Information Acquisition under Persuasive Precedent versus Binding Precedent (Preliminary and Incomplete)


Ying Chen (Department of Economics, Johns Hopkins University)
Hülya Eraslan (Department of Economics, Rice University)

March 25, 2016

Abstract

We analyze a dynamic model of judicial decision making. A court regulates a set of activities by allowing or banning them. In each period a new case arises, and the appointed judge must decide whether the case should be allowed or banned. The judge is uncertain about the correct ruling until she conducts a costly investigation. We compare two institutions: persuasive precedent and binding precedent. Under persuasive precedent, the judge is not required to follow previous rulings but can use the information acquired through investigations in previous periods. Under binding precedent, however, the judge must follow previous rulings when they apply. We analyze both a three-period model and an infinite-horizon model. In both models, we find that the judge's incentive to acquire information is stronger under binding precedent than under persuasive precedent in early periods, when there are few precedents; but as more precedents are established over time, the incentive to acquire information becomes weaker under binding precedent than under persuasive precedent.

1 Introduction

We analyze a dynamic model of judicial decision making. A court regulates a set of activities by allowing or banning them. In each period a new judge is appointed, and a new case arises which must be decided by the judge appointed in that period. The judges share the same preferences over whether a case should be allowed or banned, but they are uncertain about the correct ruling until a costly investigation is made. Following Baker and Mezzetti [2012], we assume that the judge appointed in a given period can either investigate the case before making a ruling or decide summarily without investigation. We compare two institutions: persuasive precedent and binding precedent. Under persuasive precedent, a judge is not required to follow previous rulings but can use the information acquired by the investigations of previous judges. Under binding precedent, however, a judge must follow previous rulings when they apply. We show that the appointed judges' incentives to acquire information are stronger under binding precedent than under persuasive precedent in earlier periods, when there are few precedents; but as more precedents are established over time, the appointed judge's incentive to acquire information becomes weaker under binding precedent than under persuasive precedent. To see why, note that the cost of making a wrong summary decision is higher under binding precedent than under persuasive precedent, since future judges have to follow precedents when they are binding even if they are erroneous. Hence, a judge who faces few precedents is more inclined to investigate to avoid mistakes under binding precedent. As more precedents are established over time, however, the value of information acquired through investigation becomes lower under binding precedent, since future judges may not be able to use the information to make rulings; this discourages judges from acquiring information under binding precedent.
We establish these results first in a simple three-period model and then in an infinite-horizon model. In the infinite-horizon model, we show that there is a unique Markov perfect equilibrium payoff by showing that the Contraction Mapping Theorem applies. In our model, the uniqueness of the Markov perfect equilibrium payoff

implies the uniqueness of the Markov perfect equilibrium strategy profile. Given the contraction property, value function iteration converges to the unique equilibrium payoff. The numerical results we obtain are consistent with our finding that the appointed judge investigates more under binding precedent than under persuasive precedent in earlier periods, but as more precedents are established over time, the appointed judge eventually investigates less under binding precedent.

Related Literature Landes and Posner [1976], Schwartz [1992], Rasmusen [1994], Talley [1999], Bueno De Mesquita and Stephenson [2002], Gennaioli and Shleifer [2007], Baker and Mezzetti [2012], Ellison and Holden [2014], Anderlini, Felli, and Riboni [2014], Callander and Hummel [2014], Callander and Clark [2015].

2 Model

A court regulates a set of activities by allowing or banning them. In each period, a new case arises which must be decided by the appointed judge. The judge prefers to allow activities that she regards as beneficial and to ban activities that she regards as harmful. We denote a case by a real number $x \in [0, 1]$. The judge has a threshold value $\theta \in [0, 1]$ such that she regards case $x$ as beneficial and would like it to be permitted if and only if $x < \theta$. The preference parameter $\theta$ is initially unknown. Specifically, we assume $\theta$ is distributed according to a cumulative distribution function $F$ with density $f$. The support of $\theta$ is $[\underline{\theta}, \overline{\theta}]$, where $0 < \underline{\theta} < \overline{\theta} < 1$.

In period $t \in \{1, 2, \dots\}$, a case $x_t$ arises randomly according to a distribution $G$ on $[0, 1]$. We assume that the cases are independent across periods. The precedent at time $t$ is captured by two numbers $L_t$ and $R_t$ with $0 < L_t < R_t < 1$, where $L_t$ is the highest case that was ever allowed and $R_t$ is the lowest case that was ever banned by time $t$. We assume $L_1 < \underline{\theta} < \overline{\theta} < R_1$, that is, the precedent at the beginning of period 1 is consistent with the judge's preferences. The timing of events is as follows.
In period $t$, after case $x_t$ is brought to the court, the judge chooses whether or not to investigate the case before deciding

whether to permit the activity or ban it.[1][2] An investigation allows the judge to learn the value of $\theta$ at a fixed cost $z$. If the case is decided without an investigation, we say the judge made a summary decision.

Let $s = ((L, R), x)$. In what follows, for expositional convenience, we refer to $s$ as the state even though it does not include the information about $\theta$. Let $S_p$ denote the set of possible precedents, i.e., $S_p = \{(L, R) \in [0, 1]^2 : R > L\}$, and let $S$ denote the set of possible states, i.e., $S = S_p \times [0, 1]$.

Denote the ruling at time $t$ by $r_t \in \{0, 1\}$, where $r_t = 0$ if the case is banned and $r_t = 1$ if the case is permitted. After the judge makes her ruling, the precedent changes to $L_{t+1}$ and $R_{t+1}$. If $x_t$ was permitted, then $L_{t+1} = \max\{L_t, x_t\}$ and $R_{t+1} = R_t$; if $x_t$ was banned, then $L_{t+1} = L_t$ and $R_{t+1} = \min\{R_t, x_t\}$. Formally, the transition of the precedent is captured by the function $\pi : S \times \{0, 1\} \to S_p$, where $\pi(s, 0)$ is the vector $(L, \min\{R, x\})$ and $\pi(s, 1)$ is the vector $(\max\{L, x\}, R)$.

We consider two institutions: binding precedent and persuasive precedent. Under binding precedent, in period $t$ the judge must permit $x_t$ if $x_t \le L_t$ and must ban it if $x_t \ge R_t$. Under persuasive precedent, the judge is free to make any decision; in this case, the role of the precedent is potentially to provide information regarding whether the case is beneficial or not. We assume the violation of a binding precedent is infinitely costly.

The payoff of the judge from ruling $r_t$ on case $x_t$ in period $t$ is given by
\[
u(x_t, \theta, r_t) = \begin{cases} 0 & \text{if } x_t < \theta \text{ and } r_t = 1, \text{ or } x_t \ge \theta \text{ and } r_t = 0, \\ l(x_t, \theta) & \text{otherwise}, \end{cases}
\]
where $l(x_t, \theta)$ is the cost of making a mistake, that is, permitting a case when it is above $\theta$ or banning a case when it is below $\theta$. Assume that $l(x, \theta) = 0$ if $x = \theta$.

[1] For expositional simplicity, we assume that the judge investigates the case when indifferent.

[2] We assume that the judge learns about her preference parameter $\theta$ through investigation. Alternatively, we can assume that the judge learns about her preferences in terms of the consequences of cases, but does not know the consequence of a particular case unless she investigates. To illustrate, let $c(x)$ denote the consequence of case $x$ and assume that $c(x) = x + \gamma$. The judge would like to permit case $x$ if $c(x)$ is below some threshold $\bar{c}$ and would like to ban it otherwise. Suppose that the judge knows $\bar{c}$ and observes $x$, but $\gamma$ is unknown until a judge investigates. This alternative model is equivalent to ours.
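The precedent transition $\pi$ defined above is simple enough to state as code. A minimal sketch (the tuple representation of the state is our own choice, not the paper's notation):

```python
def transition(state, r):
    """Precedent transition pi(s, r): a permitted case can raise L,
    a banned case can lower R; the other bound is unchanged."""
    (L, R), x = state
    if r == 1:                        # r = 1: case x is permitted
        return (max(L, x), R)
    else:                             # r = 0: case x is banned
        return (L, min(R, x))

# Permitting x = 0.5 under precedent (0.2, 0.8) tightens it from the left:
print(transition(((0.2, 0.8), 0.5), 1))   # -> (0.5, 0.8)
# Banning the same case tightens it from the right:
print(transition(((0.2, 0.8), 0.5), 0))   # -> (0.2, 0.5)
```

A case that falls outside $(L, R)$ leaves the precedent unchanged under the corresponding forced ruling, e.g. permitting $x = 0.1$ under $(0.2, 0.8)$ returns $(0.2, 0.8)$.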

and l(x, ) < 0 if x. Also assume that l(x, ) is continuous, strictly decreasing in x and strictly increasing in if x > and strictly increasing in x and strictly decreasing in if x <. For example, if l(x, ) = f( x ) where f is a continuous and strictly decreasing function with f(0) = 0, then these assumptions are satisfied. The dynamic payoff of the judge is the sum of her discounted payoffs from the rulings made in each period net of the investigation cost, appropriately discounted, if she carries out an investigation. The discount factor is denoted by δ. Persuasive precedent In the model with persuasive precedent, the payoff-relevant state in any period is the realized case x [0, 1] and the information about. If is known at the time when the relevant decisions are made, then it is optimal not to investigate the case for any x [0, 1] and it is optimal to permit x if x < and to ban x if x >. If is unknown at the time when the relevant decisions are made, a policy for the judge is a pair of functions σ = (µ, ρ), where µ : [0, 1] {0, 1} is an investigation policy and ρ : [0, 1] {0, 1} is an uninformed ruling policy, where µ(x) = 1 if and only if an investigation is made when the case is x; and ρ(x) = 1 if and only if case x is permitted. For each policy σ = (µ, ρ), let V ( ; σ) be the associated value function, that is, V (x; σ) represents the dynamic payoff of the judge when she is uninformed, faces case x in the current period, and follows the policy σ. In what follows, we suppress the dependence of the dynamic payoffs on σ for notational convenience. The policy σ is optimal if σ and the associated value function V satisfy the following conditions: (P1) The uninformed ruling policy satisfies ρ (x) = 1 if max{x,} l(x, )df () > min{x, } l(x, )df () 5

and $\rho^*(x) = 0$ if
\[
\int_{\underline{\theta}}^{\min\{x, \overline{\theta}\}} l(x, \theta)\, dF(\theta) < \int_{\max\{x, \underline{\theta}\}}^{\overline{\theta}} l(x, \theta)\, dF(\theta)
\]
for any case $x$.

(P2) Given $V^*$ and the uninformed ruling policy $\rho^*$, the investigation policy satisfies $\mu^*(x) = 1$ if and only if
\[
-z \ge \rho^*(x) \int_{\underline{\theta}}^{\min\{x, \overline{\theta}\}} l(x, \theta)\, dF(\theta) + (1 - \rho^*(x)) \int_{\max\{x, \underline{\theta}\}}^{\overline{\theta}} l(x, \theta)\, dF(\theta) + \delta \int_0^1 V^*(x')\, dG(x').
\]

(P3) Given $\sigma^*$, for any case $x$, the dynamic payoff satisfies
\[
V^*(x) = -z\,\mu^*(x) + (1 - \mu^*(x)) \left[ \rho^*(x) \int_{\underline{\theta}}^{\min\{x, \overline{\theta}\}} l(x, \theta)\, dF(\theta) + (1 - \rho^*(x)) \int_{\max\{x, \underline{\theta}\}}^{\overline{\theta}} l(x, \theta)\, dF(\theta) + \delta \int_0^1 V^*(x')\, dG(x') \right].
\]

Condition (P1) says that the ruling decision depends only on the current-period payoff; in particular, the judge chooses the ruling that minimizes the expected cost of making a mistake in the current period. This is because under persuasive precedent, the ruling does not affect the judge's continuation payoff. Condition (P2) says that when uninformed, the judge chooses to investigate a case if and only if her dynamic payoff from investigating is at least as high as her expected dynamic payoff from not investigating. If a judge investigates case $x$, then $\theta$ becomes known and no mistake in ruling is made in the current period or in the future; in this case, the dynamic payoff of the judge is the negative of the investigation cost. If a judge does not investigate case $x$, then her dynamic payoff is the sum of the expected cost of making a mistake in the current period and the continuation payoff. This is given in condition (P3).

Binding precedent

In the model with binding precedent, the payoff-relevant state in any period is the precedent pair $(L, R) \in S_p$, the realized case $x \in [0, 1]$, and the information about $\theta$. If $\theta$ is known at the time when the relevant decisions are made, then it is optimal not to investigate the case for any $s$, to permit $x$ if $x < \max\{L, \theta\}$, and to ban $x$ if $x > \min\{R, \theta\}$.

Let $C(L, R)$ denote the expected dynamic payoff of the judge when the precedent is $(L, R)$, conditional on $\theta$ being known when decisions regarding the cases are made, where the expectation is taken over $\theta$ before it is revealed and over all future cases $x$. Formally,
\[
C(L, R) = \frac{1}{1 - \delta} \left[ \int_{\mathbb{L}} \int_{\theta}^{L} l(x, \theta)\, dG(x)\, dF(\theta) + \int_{\mathbb{R}} \int_{R}^{\theta} l(x, \theta)\, dG(x)\, dF(\theta) \right], \tag{1}
\]
where $\mathbb{L}$ is the (possibly degenerate) interval $[\underline{\theta}, \max\{L, \underline{\theta}\}]$ and $\mathbb{R}$ is the (possibly degenerate) interval $[\min\{R, \overline{\theta}\}, \overline{\theta}]$. Equivalently,
\[
C(L, R) = \begin{cases} 0 & \text{if } L \le \underline{\theta} \text{ and } R \ge \overline{\theta}, \\[4pt] \dfrac{1}{1 - \delta} \left[ \displaystyle\int_{\underline{\theta}}^{L} \int_{\theta}^{L} l(x, \theta)\, dG(x)\, dF(\theta) + \int_{R}^{\overline{\theta}} \int_{R}^{\theta} l(x, \theta)\, dG(x)\, dF(\theta) \right] & \text{if } L > \underline{\theta} \text{ and } R < \overline{\theta}, \\[4pt] \dfrac{1}{1 - \delta} \displaystyle\int_{\underline{\theta}}^{L} \int_{\theta}^{L} l(x, \theta)\, dG(x)\, dF(\theta) & \text{if } L > \underline{\theta} \text{ and } R \ge \overline{\theta}, \\[4pt] \dfrac{1}{1 - \delta} \displaystyle\int_{R}^{\overline{\theta}} \int_{R}^{\theta} l(x, \theta)\, dG(x)\, dF(\theta) & \text{if } L \le \underline{\theta} \text{ and } R < \overline{\theta}. \end{cases}
\]
To see how we derive $C(L, R)$, note that if $\theta < L$ and $x \in (\theta, L]$, then the judge incurs a cost of $l(x, \theta)$ since she has to permit $x$; similarly, if $\theta > R$ and $x \in [R, \theta)$, then the judge incurs a cost of $l(x, \theta)$ since she has to ban $x$. It follows that the expected per-period payoff of a judge conditional on $\theta$ being known is $\int_{\mathbb{L}} \int_{\theta}^{L} l(x, \theta)\, dG(x)\, dF(\theta) + \int_{\mathbb{R}} \int_{R}^{\theta} l(x, \theta)\, dG(x)\, dF(\theta)$, and her dynamic payoff in the infinite-horizon model is $1/(1 - \delta)$ times the per-period payoff. Note that $\max\{L, \theta\}$ is increasing in $L$ and $\min\{R, \theta\}$ is decreasing in $R$, and therefore $C(L, R)$ is decreasing in $L$ and increasing in $R$.

If $\theta$ is unknown at the time when the decisions regarding the cases are made, a policy for the judge is a pair of functions $\sigma = (\mu, \rho)$, where $\mu : S \to \{0, 1\}$ is an investigation policy and $\rho : S \to \{0, 1\}$ is an uninformed ruling policy.
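To make equation (1) concrete, the sketch below evaluates $C(L, R)$ by simple numerical integration under illustrative primitives that are our own assumptions, not the paper's: $\theta$ uniform on $[0.3, 0.7]$, $G$ uniform on $[0, 1]$, $l(x, \theta) = -|x - \theta|$, and $\delta = 0.9$.

```python
theta_lo, theta_hi = 0.3, 0.7       # support of theta (F uniform, assumed)
delta = 0.9                          # discount factor (assumed)

def l(x, th):
    """Assumed mistake cost: l(x, theta) = -|x - theta| <= 0."""
    return -abs(x - th)

def C(L, R, n=400):
    """Equation (1): discounted expected cost of forced rulings once theta
    is known. Mistakes arise on x in (theta, L] when theta < L (forced
    permits) and on x in [R, theta) when theta > R (forced bans)."""
    f = 1.0 / (theta_hi - theta_lo)             # uniform density of theta
    dth = (theta_hi - theta_lo) / n
    per_period = 0.0
    for i in range(n):
        th = theta_lo + (i + 0.5) * dth
        for lo, hi in ((th, L), (R, th)):       # forced-permit, forced-ban region
            if lo < hi:                         # g(x) = 1 on [0, 1]
                dx = (hi - lo) / n
                per_period += sum(l(lo + (j + 0.5) * dx, th)
                                  for j in range(n)) * dx * f * dth
    return per_period / (1 - delta)

print(C(0.2, 0.8))                  # precedent looser than the support: -> 0.0
print(C(0.5, 0.8) < C(0.4, 0.8))    # C is decreasing in L: -> True
```

The first case matches the top branch of the piecewise formula: when $L \le \underline{\theta}$ and $R \ge \overline{\theta}$, no forced mistakes are possible and $C(L, R) = 0$.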

Here $\mu(s) = 1$ if and only if an investigation is made when the state is $s$, and $\rho(s) = 1$ if and only if case $x$ is permitted when the state is $s$. For each policy $\sigma = (\mu, \rho)$, let $V(\cdot\,; \sigma)$ denote the associated value function, that is, $V(s; \sigma)$ represents the dynamic payoff of the judge when the state is $((L, R), x)$, $\theta$ is unknown, and she follows the policy $\sigma$. In what follows, we suppress the dependence of $V$ on $\sigma$ for notational convenience. The policy $\sigma^*$ is optimal if $\sigma^*$ and the associated value function $V^*$ satisfy the following conditions:

(B1) Given $V^*$, the uninformed ruling policy satisfies $\rho^*(s) = 1$ if either $x \le L$, or $x \in (L, R)$ and
\[
\int_{\underline{\theta}}^{\min\{x, \overline{\theta}\}} l(x, \theta)\, dF(\theta) + \delta \int_0^1 V^*(\pi(s, 1), x')\, dG(x') > \int_{\max\{x, \underline{\theta}\}}^{\overline{\theta}} l(x, \theta)\, dF(\theta) + \delta \int_0^1 V^*(\pi(s, 0), x')\, dG(x'),
\]
and $\rho^*(s) = 0$ if either $x \ge R$, or $x \in (L, R)$ and
\[
\int_{\underline{\theta}}^{\min\{x, \overline{\theta}\}} l(x, \theta)\, dF(\theta) + \delta \int_0^1 V^*(\pi(s, 1), x')\, dG(x') < \int_{\max\{x, \underline{\theta}\}}^{\overline{\theta}} l(x, \theta)\, dF(\theta) + \delta \int_0^1 V^*(\pi(s, 0), x')\, dG(x'),
\]
for any state $s$.

(B2) Given $V^*$ and the uninformed ruling policy $\rho^*$, for any state $s$, the investigation policy $\mu^*$ satisfies $\mu^*(s) = 1$ if and only if
\[
-z + \mathbb{1}_{\mathbb{L}}(x) \int_{\underline{\theta}}^{x} l(x, \theta)\, dF(\theta) + \mathbb{1}_{\mathbb{R}}(x) \int_{x}^{\overline{\theta}} l(x, \theta)\, dF(\theta) + \delta C(L, R) \ge \rho^*(s) \int_{\underline{\theta}}^{\min\{x, \overline{\theta}\}} l(x, \theta)\, dF(\theta) + (1 - \rho^*(s)) \int_{\max\{x, \underline{\theta}\}}^{\overline{\theta}} l(x, \theta)\, dF(\theta) + \delta \int_0^1 V^*(\pi(s, \rho^*(s)), x')\, dG(x'),
\]
where $\mathbb{1}_A(x)$ takes the value 1 if $x \in A$ and 0 otherwise.

(B3) Given $\sigma^*$, for any state $s$, the dynamic payoff satisfies
\[
V^*(s) = \mu^*(s) \left[ -z + \mathbb{1}_{\mathbb{L}}(x) \int_{\underline{\theta}}^{x} l(x, \theta)\, dF(\theta) + \mathbb{1}_{\mathbb{R}}(x) \int_{x}^{\overline{\theta}} l(x, \theta)\, dF(\theta) + \delta C(L, R) \right] + (1 - \mu^*(s)) \left[ \rho^*(s) \int_{\underline{\theta}}^{\min\{x, \overline{\theta}\}} l(x, \theta)\, dF(\theta) + (1 - \rho^*(s)) \int_{\max\{x, \underline{\theta}\}}^{\overline{\theta}} l(x, \theta)\, dF(\theta) + \delta \int_0^1 V^*(\pi(s, \rho^*(s)), x')\, dG(x') \right].
\]

Under binding precedent, the ruling decision may change the precedent, which in turn may affect the continuation payoff. As such, condition (B1) says the ruling decision depends on both the current-period payoff and the continuation payoff. In particular, the judge chooses the ruling that maximizes the sum of the current-period payoff and the continuation payoff, taking into consideration how her ruling affects the precedent in the next period. Condition (B2) says that the judge chooses to investigate a case if and only if her dynamic payoff from investigating is at least as high as her expected dynamic payoff from not investigating. If a judge investigates case $x$, then $\theta$ becomes known. When precedents are binding, however, mistakes in ruling can still happen if $\theta < L$ or if $\theta > R$. In this case, the dynamic payoff is the expected cost of making mistakes in ruling, both in the current period and in future periods, minus the cost of investigation. If a judge does not investigate case $x$, then her dynamic payoff is the sum of the expected cost of making a mistake in the current period and the continuation payoff. Condition (B3) formalizes this.

3 A three-period model

Before we analyze the infinite-horizon model, we discuss a three-period model to illustrate some of the intuition.

3.1 Persuasive Precedent

Consider the judge in period $t$. If the judge has investigated in a previous period, then $\theta$ is known and the period-$t$ judge permits or bans case $x_t$ according to $\theta$. Suppose the judge has not investigated; then her belief about $\theta$ is the same as the prior. If the judge investigates in period $t$, her payoff is $-z$ in period $t$ and 0 in future periods. The following result says that if the judge is uninformed and does not investigate in period $t$, then there exists a threshold in $(\underline{\theta}, \overline{\theta})$ such that she permits $x_t$ if it is below this threshold and bans $x_t$ if it is above this threshold.

Lemma 1. Under persuasive precedent, there exists $\hat{x} \in (\underline{\theta}, \overline{\theta})$ such that if the judge has not investigated in a previous period and does not investigate $x_t$ in period $t$, then she permits $x_t$ if $x_t < \hat{x}$ and bans $x_t$ if $x_t > \hat{x}$.

Now we analyze the judges' investigation decisions. The following lemma says that when the investigation cost is sufficiently low, the judge investigates with positive probability in each period, the set of cases that the judge investigates in period $t$ forms an interval, and the interval of investigation is larger in an earlier period. Intuitively, for cases that fall in the middle, it is less clear to a judge whether she should permit or ban, and the expected cost of making a mistake is higher. Hence, the value of investigation is higher for these cases. Since the judge can use the information she acquires in an earlier period in later periods, the value of investigation is higher in an earlier period, resulting in more cases being investigated in an earlier period.

Lemma 2. In the three-period model under persuasive precedent, there exists $\bar{z} > 0$ such that if $z < \bar{z}$, the judge investigates $x_t$ ($t = 1, 2, 3$) with positive probability in equilibrium. Specifically, there exist $x_t^L$ and $x_t^H > x_t^L$ such that the judge investigates $x_t$ if and only if $x_t \in [x_t^L, x_t^H]$. Moreover, $x_1^L < x_2^L < x_3^L$ and $x_3^H < x_2^H < x_1^H$.
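Lemma 1's threshold can be computed directly for specific primitives. The sketch below uses illustrative assumptions of our own ($\theta$ uniform on $[0.3, 0.7]$ and $l(x, \theta) = -|x - \theta|$) and locates $\hat{x}$ where the expected cost of a summary permit equals the expected cost of a summary ban; with symmetric primitives, $\hat{x}$ sits at the midpoint of the support.

```python
theta_lo, theta_hi = 0.3, 0.7        # assumed support of theta, F uniform

def summary_payoff(x, permit, n=1000):
    """Expected current-period payoff from ruling on x without investigating,
    with assumed cost l(x, theta) = -|x - theta|."""
    f = 1.0 / (theta_hi - theta_lo)
    dth = (theta_hi - theta_lo) / n
    total = 0.0
    for i in range(n):
        th = theta_lo + (i + 0.5) * dth
        mistake = (th <= x) if permit else (th > x)   # permit is wrong iff x >= theta
        if mistake:
            total += -abs(x - th) * f * dth
    return total

# rho*(x) = 1 exactly when permitting beats banning; x-hat is the crossing point.
grid = [i / 100 for i in range(101)]
xhat = min(grid, key=lambda x: abs(summary_payoff(x, True) - summary_payoff(x, False)))
print(xhat)   # -> 0.5 for these symmetric primitives
```

Both expected payoffs are zero outside $(\underline{\theta}, \overline{\theta})$ for the correct summary ruling, which is why the interesting comparison happens only in the interior of the support.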
3.2 Binding precedent

We first show that in equilibrium, in each period $t$, the set of cases that the period-$t$ judge investigates also forms a (possibly degenerate) interval under binding precedent.

Lemma 3. Under binding precedent, the set of cases that the judge investigates in period $t$ in equilibrium is either empty or convex for any precedent $(L_t, R_t)$; if $x_t \notin (L_t, R_t)$, then the judge does not investigate $x_t$ in period $t$.

In the next proposition, we show that the judge investigates more under binding precedent than under persuasive precedent in period 1, but she investigates less under binding precedent than under persuasive precedent in periods 2 and 3.

Proposition 1. The judge investigates less under binding precedent than under persuasive precedent in period 3. Specifically, for any precedent $(L_3, R_3)$, the judge investigates $x_3$ if and only if $x_3 \in (L_3, R_3) \cap [x_3^L, x_3^H]$. The judge also investigates less under binding precedent than under persuasive precedent in period 2. Specifically, if $[\underline{\theta}, \overline{\theta}] \subseteq [L_2, R_2]$, then the set of cases that the judge investigates in period 2 under binding precedent is the same as $[x_2^L, x_2^H]$; otherwise the set of cases she investigates under binding precedent is a subset of $[x_2^L, x_2^H]$. The judge investigates more under binding precedent than under persuasive precedent in period 1; that is, given any precedent $(L_1, R_1) \supset (\underline{\theta}, \overline{\theta})$, the set of cases that the judge investigates under binding precedent contains the set $[x_1^L, x_1^H]$.

The reason the judge investigates less in period 3 under binding precedent is that investigation has no value if $x_3 \le L_3$ or if $x_3 \ge R_3$, since the judge must permit any $x_3 \le L_3$ and must ban any $x_3 \ge R_3$ no matter what the investigation outcome is; moreover, since period 3 is the last period, the information about $\theta$ has no value for the future either. For $x_3 \in (L_3, R_3)$, the judge faces the same incentives under binding and persuasive precedent and therefore investigates the same set of cases.

If the precedent in period 2 satisfies $[\underline{\theta}, \overline{\theta}] \subseteq [L_2, R_2]$, then investigation avoids mistakes in ruling in the current period as well as in the future period even under binding precedent. In this case, the judge faces the same incentives under binding and persuasive precedent and therefore investigates the same set of cases. However, if the precedent in period 2 does not satisfy $[\underline{\theta}, \overline{\theta}] \subseteq [L_2, R_2]$, then even if $x_2 \in (L_2, R_2)$ and the judge investigates, mistakes in ruling can still happen in period 3 under binding precedent if $\theta \notin [L_2, R_2]$, since the judge is bound to follow the precedent. In this case,

the value of investigation is lower under binding precedent than under persuasive precedent, and therefore the judge investigates less under binding precedent.

Since the precedent in period 1 satisfies $[\underline{\theta}, \overline{\theta}] \subseteq [L_1, R_1]$, investigation avoids mistakes in ruling in the current period as well as in future periods even under binding precedent. However, for $x_1 \in (\underline{\theta}, \overline{\theta})$, if the judge does not investigate $x_1$ and makes a summary ruling, then she changes the precedent in a way that $[\underline{\theta}, \overline{\theta}] \not\subseteq [L_2, R_2]$. As discussed in the previous paragraph, the binding precedent arising from a summary ruling potentially results in mistakes in the future and diminishes the judge's incentive to investigate in future periods, which in turn lowers the judge's dynamic payoff. Hence, the judge's payoff from not investigating in period 1 is lower under binding precedent than under persuasive precedent, and therefore she has a stronger incentive to investigate under binding precedent.

4 Infinite-horizon model

We now consider the infinite-horizon model, that is, $T = \infty$.

4.1 Persuasive precedent

If the judge has already investigated in a previous period, then the judge knows the value of $\theta$ and permits or bans a case according to $\theta$. We next show that if the judge has not investigated in a previous period, then the set of cases that the judge investigates in period $t$ is convex.

Proposition 2. Under persuasive precedent, if the judge investigates $x_1$ and $x_2 > x_1$ in period $t$ in equilibrium, then she investigates any $x \in [x_1, x_2]$ in period $t$ in equilibrium.

Suppose the set of cases that the judge investigates is nonempty. Let $a_p = \inf\{x : \mu^*(x) = 1\}$ and $b_p = \sup\{x : \mu^*(x) = 1\}$. We next show that if the judge faces a case for which there is no uncertainty about the correct ruling (that is, if $x \le \underline{\theta}$ or if $x \ge \overline{\theta}$), then the judge does not investigate the case, even though the information from an investigation would be valuable for future rulings.

Proposition 3. Under persuasive precedent, the optimal investigation policy satisfies $\underline{\theta} < a_p \le b_p < \overline{\theta}$.

Let $EV = \int_0^1 V^*(x')\, dG(x')$. To find the optimal policy and the value function, note that
\[
V^*(x) = \begin{cases} \delta EV & \text{if } x \le \underline{\theta}, \text{ or if } x \ge \overline{\theta}, \\ \displaystyle\int_{\underline{\theta}}^{x} l(x, \theta)\, dF(\theta) + \delta EV & \text{if } \underline{\theta} < x < a_p, \\ -z & \text{if } x \in [a_p, b_p], \\ \displaystyle\int_{x}^{\overline{\theta}} l(x, \theta)\, dF(\theta) + \delta EV & \text{if } b_p < x < \overline{\theta}. \end{cases} \tag{2}
\]
Hence, we have
\[
EV = -z\,[G(b_p) - G(a_p)] + \delta EV\,[G(a_p) + 1 - G(b_p)] + \int_{\underline{\theta}}^{a_p} \int_{\underline{\theta}}^{x} l(x, \theta)\, dF(\theta)\, dG(x) + \int_{b_p}^{\overline{\theta}} \int_{x}^{\overline{\theta}} l(x, \theta)\, dF(\theta)\, dG(x).
\]
For any $a, b$ such that $\underline{\theta} < a \le b < \overline{\theta}$, let
\[
h(a, b) = \int_{\underline{\theta}}^{a} \int_{\underline{\theta}}^{x} l(x, \theta)\, dF(\theta)\, dG(x) + \int_{b}^{\overline{\theta}} \int_{x}^{\overline{\theta}} l(x, \theta)\, dF(\theta)\, dG(x).
\]
Then
\[
EV = \frac{h(a_p, b_p) - z\,[G(b_p) - G(a_p)]}{1 - \delta\,[G(a_p) + 1 - G(b_p)]}. \tag{3}
\]
Since the judge is indifferent between investigating and not investigating when $x = a_p$, we have
\[
-z = \int_{\underline{\theta}}^{a_p} l(a_p, \theta)\, dF(\theta) + \delta EV. \tag{4}
\]
Similarly, since the appointed judge is indifferent between investigating and not investigating when $x = b_p$, we have
\[
-z = \int_{b_p}^{\overline{\theta}} l(b_p, \theta)\, dF(\theta) + \delta EV. \tag{5}
\]
We can solve for $EV$, $a_p$, and $b_p$ from equations (3), (4), and (5); plugging these into (2), we can then solve for $V^*(x)$.
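Equations (2)–(5) suggest a direct numerical solution: iterate on the scalar $EV$, then read off $[a_p, b_p]$ as the region where investigating (payoff $-z$) beats the best summary ruling. The primitives below ($\theta$ uniform on $[0.3, 0.7]$, $G$ uniform on $[0, 1]$, $l(x, \theta) = -|x - \theta|$, $\delta = 0.9$, $z = 0.01$) are our own illustrative assumptions, not the paper's specification.

```python
theta_lo, theta_hi, delta, z = 0.3, 0.7, 0.9, 0.01   # assumed primitives

def best_summary(x, n=400):
    """Max over the two summary rulings of the expected current-period payoff."""
    f = 1.0 / (theta_hi - theta_lo)
    dth = (theta_hi - theta_lo) / n
    permit = ban = 0.0
    for i in range(n):
        th = theta_lo + (i + 0.5) * dth
        if th <= x: permit += -abs(x - th) * f * dth   # permitting is a mistake
        else:       ban    += -abs(x - th) * f * dth   # banning is a mistake
    return max(permit, ban)

xs = [i / 500 for i in range(501)]                     # grid for x (G uniform)
S = [best_summary(x) for x in xs]                      # precompute stage payoffs

EV = 0.0
for _ in range(300):                                   # fixed-point iteration on EV
    V = [max(-z, s + delta * EV) for s in S]           # equation (2), case by case
    EV = sum(V) / len(V)                               # EV = int V dG

inv = [x for x, v in zip(xs, V) if v == -z]            # investigation region
a_p, b_p = min(inv), max(inv)
print(round(a_p, 3), round(b_p, 3))
```

Consistent with Proposition 3, the computed interval satisfies $\underline{\theta} < a_p \le b_p < \overline{\theta}$, and with symmetric primitives it is symmetric around the midpoint of the support.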

4.2 Binding precedent

We now consider binding precedent. Let $\mathcal{F}$ denote the set of bounded measurable functions on $S$ taking values in $\mathbb{R}$. For $f \in \mathcal{F}$, let $\|f\| = \sup\{|f(s)| : s \in S\}$. An operator $Q : \mathcal{F} \to \mathcal{F}$ satisfies the contraction property if there is a $\beta \in (0, 1)$ such that for all $f_1, f_2 \in \mathcal{F}$, we have $\|Q(f_1) - Q(f_2)\| \le \beta \|f_1 - f_2\|$. For any operator $Q$ that satisfies the contraction property, there is a unique $f \in \mathcal{F}$ such that $Q(f) = f$.

Recall that $\mathbb{L}$ is the (possibly degenerate) interval $[\underline{\theta}, \max\{L, \underline{\theta}\}]$ and $\mathbb{R}$ is the (possibly degenerate) interval $[\min\{R, \overline{\theta}\}, \overline{\theta}]$. Let $A(s)$ denote the judge's dynamic payoff if she investigates in state $s = ((L, R), x)$, not including the investigation cost. Formally,
\[
A(s) = \mathbb{1}_{\mathbb{L}}(x) \int_{\underline{\theta}}^{x} l(x, \theta)\, dF(\theta) + \mathbb{1}_{\mathbb{R}}(x) \int_{x}^{\overline{\theta}} l(x, \theta)\, dF(\theta) + \delta C(L, R).
\]
Also, let $g^p(s)$ be the judge's current-period payoff if she permits the case without investigation in state $s$, and let $g^b(s)$ be her current-period payoff if she bans the case without investigation in state $s$. Formally,
\[
g^p(s) = \begin{cases} \displaystyle\int_{\underline{\theta}}^{\min\{x, \overline{\theta}\}} l(x, \theta)\, dF(\theta) & \text{if } x < R, \\ -\infty & \text{if } x \ge R, \end{cases} \qquad
g^b(s) = \begin{cases} \displaystyle\int_{\max\{x, \underline{\theta}\}}^{\overline{\theta}} l(x, \theta)\, dF(\theta) & \text{if } x > L, \\ -\infty & \text{if } x \le L. \end{cases}
\]
For any $V \in \mathcal{F}$ and $(L, R) \in S_p$, let $EV(L, R) = \int_0^1 V(L, R, x')\, dG(x')$. Note that for any $s \in S$, $\mu^*(s)$ as defined in (B2) satisfies
\[
\mu^*(s) \in \arg\max_{\mu \in \{0, 1\}} \mu\,[-z + A(s)] + (1 - \mu) \max\{g^p(s) + \delta EV(\max\{x, L\}, R),\; g^b(s) + \delta EV(L, \min\{x, R\})\}.
\]

For $V \in \mathcal{F}$ and any $s \in S$, define
\[
TV(s) = \max\{-z + A(s),\; g^p(s) + \delta EV(\max\{x, L\}, R),\; g^b(s) + \delta EV(L, \min\{x, R\})\}.
\]
Note that $V^*$ as defined in (B3) satisfies $V^* = TV^*$. We next show that $T$ is a contraction.

Proposition 4. $T : \mathcal{F} \to \mathcal{F}$ is a contraction.

Since $T$ is a contraction, there is a unique $V$ that satisfies $TV = V$, and therefore a unique $V^*$ that satisfies (B3). Once we solve for $V^*$, we can solve for the optimal policies $\rho^*$ and $\mu^*$ from (B1) and (B2).

We next establish that the value function $V^*$ is decreasing in $L$ and increasing in $R$, and that the equilibrium investigation strategy $\mu^*$ is also decreasing in $L$ and increasing in $R$. This result says that as the precedent gets tighter, the judge investigates less and her payoff becomes lower.

Proposition 5. Suppose the precedent $(\hat{L}, \hat{R})$ is tighter than $(L, R)$, that is, $L \le \hat{L} < \hat{R} \le R$. Under binding precedent, for any case $x \in [0, 1]$, if the judge investigates $x$ under precedent $(\hat{L}, \hat{R})$, then she also investigates $x$ under precedent $(L, R)$; that is, $\mu^*(L, R, x)$ is decreasing in $L$ and increasing in $R$. Moreover, the value function $V^*(L, R, x)$ is also decreasing in $L$ and increasing in $R$.

We next show that the set of cases that an appointed judge investigates is convex, and that the judge does not investigate any case for which she must follow the precedent in ruling.

Proposition 6. Under binding precedent, for any precedent $(L, R) \in S_p$, (i) if the judge investigates cases $x_1$ and $x_2 > x_1$ in period $t$, then she also investigates any $x \in [x_1, x_2]$ in period $t$, and (ii) the judge does not investigate any case $x \notin (L, R)$.

For any $(L, R) \in S_p$ such that $\{x : \mu^*(L, R, x) = 1\} \neq \emptyset$, let $a(L, R) = \inf\{x : \mu^*(L, R, x) = 1\}$ and $b(L, R) = \sup\{x : \mu^*(L, R, x) = 1\}$. Proposition 6 implies that when facing the precedent $(L, R)$, the appointed judge investigates $x \in$

(a(l, R), b(l, R)) where a(l, R) L and b(l, R) R. We refer to (a(l, R), b(l, R)) as the investigation interval under (L, R). We conjecture that if L < a(l, R) < b(l, R) < R, then under precedent (L, R), the judge is indifferent between investigating or not when x = a(l, R) and when x = b(l, R), which implies that the set of cases that the appointed judge investigates is the closed interval [a(l, R), b(l, R)]. Suppose that given the initial precedent (L 1, R 1 ), the set of cases that the judge investigates is nonempty (if it is empty, then no investigation will be carried out in any period). Specifically, the judge investigates x 1 if and only if x 1 [a(l 1, R 1 ), b(l 1, R 1 )]. For notational simplicity, let a 1 = a(l 1, R 1 ) and b 1 = b(l 1, R 1 ). If x 1 [a 1, b 1 ], judge 1 investigates x 1. In this case, since becomes known, no future judge will investigate. If x 1 / [a 1, b 1 ], then the judge makes a summary ruling without any investigation and changes the precedent to (x 1, R 1 ) if he permits the case and to (L 1, x 1 ) if he bans the case. Note that when judge 1 makes a summary ruling, the resulting new precedent satisfies L 2 < a 1 and b 1 < R 2. Monotonicity of µ in Proposition 5 implies that the investigation interval in period 2 satisfies a(l 2, R 2 ) a 1 and b(l 2, R 2 ) b 1 and therefore we have L 2 < a(l 2, R 2 ) b(l 2, R 2 ) < R 2. An iteration of this argument shows that on any realized equilibrium path, the investigation interval is a strict subset of the precedent in any period and is either closed or empty. Denote a nonempty investigation interval on an equilibrium path by [a(l e, R e ), b(l e, R e )]. By Propositions 5 and 6, we have L e < a(l e, R e ) b(l e, R e ) < R e and given the precedent (L e, R e ), the judge is indifferent between investigating x or not if x = a(l e, R e ) or if x = b(l e, R e ). 
On the equilibrium path, the investigation intervals either converge to or to some nonempty set [â, ˆb] such that if the precedent is (L, R) = (â, ˆb), then a(l, R) = â and b(l, R) = ˆb. More formally, consider a sequence {a n, b n, L n, R n } such that if {x : µ (L n, R n, x) = 1} =, then a n = a(l n, R n ), b n = b(l n, R n ), L n < L n+1 < a(l n, R n ), b(l n, R n ) < R n+1 < R n, and if {x : µ (L n, R n, x) = 1} =, then a n = b n = Ln+Rn 2, a n+1 = a n, b n+1 = b n, L n+1 = L n, R n+1 = R n. Since L 1 < a(l 1, R 1 ) < b(l 1, R 1 ) < R 1, a(l, R) is increasing in L and decreasing in R and b(l, R) is decreasing in L and increasing in R, we can find such a sequence. 16
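The monotone tightening of the precedent described above can be simulated directly. The following is an illustrative sketch, not the model's equilibrium policy: the summary-ruling cutoff xhat is an assumed stand-in for the judge's ruling rule, and investigation is ignored. It illustrates only that summary rulings can only tighten the precedent, so L_t is increasing, R_t is decreasing, and both converge.

```python
import random

# Illustrative simulation, not derived from the model's equilibrium: summary
# rulings move the precedent to (max{x, L}, R) after a permit and to
# (L, min{x, R}) after a ban, so L_t increases and R_t decreases over time.
# The cutoff xhat = 0.5 (permit below, ban above) is an assumed stand-in.
random.seed(1)
xhat = 0.5
L, R = 0.0, 1.0
Ls, Rs = [L], [R]
for _ in range(10_000):
    x = random.random()          # case x_t drawn from G = Uniform[0, 1]
    if L < x < R:                # rulings on cases outside (L, R) are forced
        if x < xhat:             # permit: precedent becomes (max{x, L}, R)
            L = max(L, x)
        else:                    # ban: precedent becomes (L, min{x, R})
            R = min(R, x)
    Ls.append(L)
    Rs.append(R)
```

The recorded paths Ls and Rs are monotone and bounded, so both converge, mirroring the convergence argument for the sequence {a_n, b_n, L_n, R_n} above.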

Note that a_n is increasing and b_n is decreasing. Since a monotone and bounded sequence converges, we can define â = lim a_n and ˆb = lim b_n.

We next show that in period 1, when the precedent is (L_1, R_1), the judge investigates more under binding precedent than under persuasive precedent. But as more precedents are established over time, the judge has less freedom in making her ruling when precedents are binding, and eventually she investigates less than if the precedent is persuasive. Recall that [a_p, b_p] denotes the set of cases that the judge investigates under persuasive precedent.

Proposition 7. We have (â, ˆb) ⊆ [a_p, b_p] ⊆ [a(L_1, R_1), b(L_1, R_1)].

Proposition 7 is analogous to Proposition 1 in the three-period model, which says that the judge investigates more under binding precedent than under persuasive precedent in the first period but investigates less under binding precedent in the second and the third periods.

5 Appendix

Proof of Lemma 1: First note that

E[u(x_t, θ, B) − u(x_t, θ, P)] = ∫_{min{x_t, θ̄}}^{θ̄} l(x_t, θ) dF(θ) − ∫_{θ̲}^{max{θ̲, x_t}} l(x_t, θ) dF(θ),

where ∫_{min{x_t, θ̄}}^{θ̄} l(x_t, θ) dF(θ) is the judge's expected payoff if she bans x_t, since she incurs a cost if and only if θ > x_t, and ∫_{θ̲}^{max{θ̲, x_t}} l(x_t, θ) dF(θ) is the judge's expected payoff if she permits x_t, since she incurs a cost if and only if x_t > θ. If x_t ≥ θ̄, then clearly E[u(x_t, θ, B) − u(x_t, θ, P)] > 0; and if x_t ≤ θ̲, then clearly E[u(x_t, θ, B) − u(x_t, θ, P)] < 0. We next consider x_t ∈ (θ̲, θ̄). Since l(x, θ) is increasing in x for θ > x and l(x, θ) < 0 for θ > x, it follows that ∫_{x_t}^{θ̄} l(x_t, θ) dF(θ) is increasing in x_t. Also, since l(x, θ) is decreasing in x for θ < x and l(x, θ) < 0 for θ < x, it follows that ∫_{θ̲}^{x_t} l(x_t, θ) dF(θ) is decreasing in x_t. Hence,

E[u(x_t, θ, B) − u(x_t, θ, P)] = ∫_{x_t}^{θ̄} l(x_t, θ) dF(θ) − ∫_{θ̲}^{x_t} l(x_t, θ) dF(θ)

is increasing in x_t. Since E[u(x_t, θ, B) − u(x_t, θ, P)] < 0 if x_t = θ̲ and E[u(x_t, θ, B) − u(x_t, θ, P)] > 0 if x_t = θ̄, and E[u(x_t, θ, B) − u(x_t, θ, P)] is continuous, it follows that there exists ˆx ∈ (θ̲, θ̄) such that E[u(x_t, θ, B) − u(x_t, θ, P)] < 0 for x_t < ˆx and E[u(x_t, θ, B) − u(x_t, θ, P)] > 0 for x_t > ˆx.

Proof of Lemma 2: Consider period 3 first. If x_3 < θ̲, then the judge knows that θ > x_3 and therefore permits the case without investigation. If x_3 > θ̄, then the judge knows that x_3 > θ and therefore bans the case without investigation. For x_3 ∈ [θ̲, ˆx), if the judge does not investigate, she permits the case and her expected payoff is ∫_{θ̲}^{x_3} l(x_3, θ) dF(θ). For x_3 ∈ (ˆx, θ̄], if the judge does not investigate, she bans the case and her expected payoff is ∫_{x_3}^{θ̄} l(x_3, θ) dF(θ). It follows that the judge's expected payoff if she does not investigate is the highest when x_3 = ˆx, and it is equal to

∫_{ˆx}^{θ̄} l(ˆx, θ) dF(θ) = ∫_{θ̲}^{ˆx} l(ˆx, θ) dF(θ). Let z̄ = −∫_{θ̲}^{ˆx} l(ˆx, θ) dF(θ) > 0. If z < z̄, then the judge investigates some cases in period 3. Specifically, suppose z < z̄ and let x_3^L < ˆx and x_3^H > ˆx be such that ∫_{θ̲}^{x_3^L} l(x_3^L, θ) dF(θ) = −z and ∫_{x_3^H}^{θ̄} l(x_3^H, θ) dF(θ) = −z. If x_3 ∈ [x_3^L, x_3^H], then the judge investigates case x_3 if she is uninformed.

Let V_t^p be the expected continuation payoff of the judge in period t if no investigation was carried out in any previous period. Then, we have

V_3^p = ∫_0^{x_3^L} ∫_{θ̲}^{x} l(x, θ) dF(θ) dG(x) + ∫_{x_3^H}^{1} ∫_{x}^{θ̄} l(x, θ) dF(θ) dG(x) − [G(x_3^H) − G(x_3^L)]z > −z.

Now consider period 2. Suppose the judge did not investigate in period 1. If the judge chooses to investigate in period 2, then her payoff in period 2 is −z and her expected payoff in period 3 is 0. If the judge chooses not to investigate in period 2, then by Lemma 1, she permits any case x_2 < ˆx and bans any case x_2 > ˆx. Note that if x_2 ≤ θ̲ or if x_2 ≥ θ̄, then her payoff is 0 if she does not investigate, since she makes the correct decision. Consider θ̲ < x_2 < ˆx and suppose the judge does not investigate the case. Since she permits such a case, her expected payoff in period 2 is ∫_{θ̲}^{x_2} l(x_2, θ) dF(θ). Similarly, for ˆx < x_2 < θ̄, if the judge does not investigate the case, she bans it, and in this case her expected payoff in period 2 is ∫_{x_2}^{θ̄} l(x_2, θ) dF(θ).

Now consider the judge's optimal investigation policy in period 2. For x_2 ∉ [θ̲, θ̄], since the judge's expected payoff is δV_3^p > −δz > −z if she does not investigate the case and −z if she investigates, it is optimal for her not to investigate x_2. For θ̲ < x_2 < ˆx, if −z ≥ ∫_{θ̲}^{x_2} l(x_2, θ) dF(θ) + δV_3^p, then it is optimal for the judge to investigate x_2 in period 2. Similarly, for ˆx < x_2 < θ̄, if −z ≥ ∫_{x_2}^{θ̄} l(x_2, θ) dF(θ) + δV_3^p, then it is optimal for the judge to investigate x_2 in period 2. For z < z̄, since

V_3^p < 0, there exist x_2^L ∈ (θ̲, x_3^L) and x_2^H ∈ (x_3^H, θ̄) such that

−z = ∫_{θ̲}^{x_2^L} l(x_2^L, θ) dF(θ) + δV_3^p = ∫_{x_2^H}^{θ̄} l(x_2^H, θ) dF(θ) + δV_3^p.

For x_2 ∈ [x_2^L, x_2^H], it is optimal for the judge to investigate x_2. Thus, we have

V_2^p = ∫_0^{x_2^L} [∫_{θ̲}^{x} l(x, θ) dF(θ) + δV_3^p] dG(x) + ∫_{x_2^H}^{1} [∫_{x}^{θ̄} l(x, θ) dF(θ) + δV_3^p] dG(x) − [G(x_2^H) − G(x_2^L)]z.

Note that

V_3^p = max_{a, b ∈ [θ̲, θ̄], b > a} ∫_0^{a} ∫_{θ̲}^{x} l(x, θ) dF(θ) dG(x) + ∫_b^1 ∫_{x}^{θ̄} l(x, θ) dF(θ) dG(x) − [G(b) − G(a)]z,

and V_3^p < 0. It follows that V_2^p < V_3^p.

Now consider period 1. If −z ≥ δV_2^p, then the judge investigates all cases in period 1. Suppose z < z̄ and z > −δV_2^p. Then by a similar argument as in period 2, there exist x_1^L ∈ (θ̲, x_2^L) and x_1^H ∈ (x_2^H, θ̄) such that

−z = ∫_{θ̲}^{x_1^L} l(x_1^L, θ) dF(θ) + δV_2^p = ∫_{x_1^H}^{θ̄} l(x_1^H, θ) dF(θ) + δV_2^p.

For x_1 ∈ [x_1^L, x_1^H], it is optimal for the judge to investigate x_1 in period 1.

Proof of Lemma 3: Consider period 3 first. Suppose the judge has not investigated in a previous period. Recall that under persuasive precedent, the judge investigates x_3 if and only if x_3 ∈ [x_3^L, x_3^H]. Since under binding precedent investigation has no value if x_3 ≤ L_3 or if x_3 ≥ R_3, the judge investigates x_3 if x_3 ∈ [x_3^L, x_3^H] ∩ (L_3, R_3). Hence, the set of cases that the judge investigates in period 3 is either empty or convex, and the judge does not investigate x_3 if x_3 ∉ (L_3, R_3).

Let k(L, R) denote the judge's expected payoff in period t under binding precedent when the precedent is (L, R) in period t, conditional on θ being known, where the

expectation is taken over θ before it is revealed and over all possible cases x. Formally,

k(L, R) = ∫_{𝓛} ∫_{θ}^{L} l(x, θ) dG(x) dF(θ) + ∫_{𝓡} ∫_{R}^{θ} l(x, θ) dG(x) dF(θ),

where 𝓛 is the (possibly degenerate) interval [θ̲, max{L, θ̲}] and 𝓡 is the (possibly degenerate) interval [min{R, θ̄}, θ̄]. Note that k(L, R) ≤ 0, and k(L, R) < 0 if L > θ̲ or if R < θ̄. Note also that k(L, R) is decreasing in L and increasing in R.

To prove the lemma for periods 2 and 3, we first establish Claim 1 below. Let EV_t^b(L, R) denote the judge's expected equilibrium continuation payoff in period t under binding precedent given that the precedent in period t is (L, R) and no investigation has been made in a previous period.

Claim 1. If EV_t^b(L, R) is decreasing in L and increasing in R, then the set of cases that the judge investigates in period t − 1 is either empty or convex for any precedent in period t − 1.

Proof: Suppose that EV_t^b(L, R) is decreasing in L and increasing in R. Fix the precedent in period t − 1 and denote it by (L_{t−1}, R_{t−1}). Suppose that the judge investigates cases x′ and x″ > x′ in period t − 1. We next show that the judge also investigates any case ˆx ∈ [x′, x″]. Let g_p(L, R, x) be the judge's current-period payoff if she permits the case without investigation in state s = (L, R, x) and g_b(s) be her current-period payoff if she bans the case without investigation in state s. Note that for any (L, R), g_p(L, R, x) is decreasing in x and g_b(L, R, x) is increasing in x.

Suppose ˆx ∈ (L_{t−1}, R_{t−1}). If the judge investigates ˆx, then her continuation payoff is −z + δk(L_{t−1}, R_{t−1}). Suppose the judge does not investigate ˆx, and without loss of generality suppose it is optimal for her to permit ˆx if she does not investigate it. Since the judge investigates x′ under precedent (L_{t−1}, R_{t−1}), we have −z + δk(L_{t−1}, R_{t−1}) ≥ g_p(L_{t−1}, R_{t−1}, x′) + δEV_t^b(max{x′, L_{t−1}}, R_{t−1}). Since g_p is decreasing in x, we have g_p(L_{t−1}, R_{t−1}, x′) ≥ g_p(L_{t−1}, R_{t−1}, ˆx). Moreover, since ˆx ≥ max{x′, L_{t−1}} and EV_t^b is decreasing in L, we have EV_t^b(max{x′, L_{t−1}}, R_{t−1}) ≥ EV_t^b(ˆx, R_{t−1}). Hence, we have −z + δk(L_{t−1}, R_{t−1}) ≥ g_p(L_{t−1}, R_{t−1}, ˆx) + δEV_t^b(ˆx, R_{t−1}), which implies that it

is optimal for the judge to investigate case ˆx.

Suppose ˆx ≤ L_{t−1}. Then the judge has to permit ˆx regardless of whether she investigates it or not. Hence, the judge investigates ˆx if −z + δk(L_{t−1}, R_{t−1}) ≥ δEV_t^b(L_{t−1}, R_{t−1}). Since the judge investigates x′ < ˆx, we have −z + δk(L_{t−1}, R_{t−1}) ≥ δEV_t^b(L_{t−1}, R_{t−1}), implying that the judge investigates ˆx. A similar argument shows that the judge investigates ˆx if ˆx ≥ R_{t−1} as well. Hence, the set of cases that the judge investigates in period t − 1 is either empty or convex for any precedent in period t − 1.

Now consider period 2 and suppose the judge did not investigate in period 1. Consider precedents (L_3, R_3) and (ˆL_3, ˆR_3) such that ˆL_3 ≤ L_3 and ˆR_3 ≥ R_3. As shown before, under the precedent (L_3, R_3), the judge's optimal policy is to investigate x_3 if x_3 ∈ (L_3, R_3) ∩ (x_3^L, x_3^H) and otherwise to make a summary ruling. By following the same policy under the looser precedent (ˆL_3, ˆR_3), the judge receives at least the same payoff as under precedent (L_3, R_3). Hence, EV_3^b(L_3, R_3) ≤ EV_3^b(ˆL_3, ˆR_3). By Claim 1, the set of cases that the judge investigates in period 2 is either empty or convex.

Consider x_2 ∉ (L_2, R_2). The difference in the judge's continuation payoff in period 2 if she investigates the case and if she does not is given by −z + δk(L_2, R_2) − δEV_3^b(L_2, R_2). Since EV_3^b(L_2, R_2) > −z + k(L_2, R_2), we have δEV_3^b(L_2, R_2) > δ[−z + k(L_2, R_2)] ≥ −z + δk(L_2, R_2), so this difference is negative, and it follows that the judge does not investigate x_2 ∉ (L_2, R_2) in period 2.

Now consider period 1. Consider precedents (L_2, R_2) and (ˆL_2, ˆR_2) such that ˆL_2 ≤ L_2 and ˆR_2 ≥ R_2. We next show that if judge 2 follows the same policy under precedent (ˆL_2, ˆR_2) as the optimal policy under (L_2, R_2), then the judge's continuation payoff is at least as high under precedent (ˆL_2, ˆR_2) as under (L_2, R_2). First consider x_2 such that the judge investigates x_2 under precedent (L_2, R_2). Note that x_2 ∈ (L_2, R_2). In this case, the judge's continuation payoff is −z + δk(L_2, R_2) under (L_2, R_2) and −z + δk(ˆL_2, ˆR_2) under (ˆL_2, ˆR_2); since k is decreasing in L and increasing in R, the latter is at least as large. Next consider x_2 such that the judge makes a summary ruling and permits x_2 under precedent (L_2, R_2). In this case, the precedent in period 3 becomes (max{x_2, L_2}, R_2). If the judge follows the same policy under precedent (ˆL_2, ˆR_2), then the precedent in period 3 becomes (max{x_2, ˆL_2}, ˆR_2). Since EV_3^b(L, R) is decreasing in L and increasing in R, we have EV_3^b(max{x_2, L_2}, R_2) ≤ EV_3^b(max{x_2, ˆL_2}, ˆR_2). Since the

judge's period-2 payoff is the same under either (L_2, R_2) or (ˆL_2, ˆR_2), it follows that her continuation payoff in period 2 is at least as high under precedent (ˆL_2, ˆR_2) as under (L_2, R_2). A similar argument shows that the result holds for x_2 such that the judge makes a summary ruling and bans x_2 under precedent (L_2, R_2). Hence, EV_2^b(L, R) is decreasing in L and increasing in R, and by Claim 1, the set of cases that the judge investigates in period 1 is either empty or convex.

Proof of Proposition 1: Consider period 3 first. Suppose the judge did not investigate in a previous period. As shown in the proof of Lemma 3, under binding precedent, the judge investigates x_3 if x_3 ∈ [x_3^L, x_3^H] ∩ (L_3, R_3). Now consider period 2 and suppose that the judge did not investigate in period 1. Recall that under persuasive precedent, the judge investigates x_2 if and only if x_2 ∈ [x_2^L, x_2^H], where [x_2^L, x_2^H] ⊇ [x_3^L, x_3^H].

First suppose [θ̲, θ̄] ⊆ [L_2, R_2]. Then the incentive of the judge in period 2 is the same under binding precedent as under persuasive precedent. In this case, under binding precedent, the judge investigates x_2 if and only if x_2 ∈ [x_2^L, x_2^H].

Next suppose [θ̲, θ̄] ⊄ [L_2, R_2]. We show below that under binding precedent, the judge does not investigate case x_2^L. Recall that the judge is indifferent between investigating and not investigating x_2^L in period 2 under persuasive precedent. That is, we have

−z = ∫_{θ̲}^{x_2^L} l(x_2^L, θ) dF(θ) + δk(x_3^L, x_3^H) − δz[G(x_3^H) − G(x_3^L)].   (6)

Consider binding precedent. If x_2^L ∉ (L_2, R_2), then the judge does not investigate x_2^L in period 2, as shown in Lemma 3. Suppose x_2^L ∈ (L_2, R_2). The difference in the judge's continuation payoff between investigating and not investigating x_2^L is

−z + δk(max{L_2, x_3^L}, min{R_2, x_3^H}) − [∫_{θ̲}^{x_2^L} l(x_2^L, θ) dF(θ) + δk(max{x_2^L, x_3^L}, min{R_2, x_3^H}) − δz[G(min{R_2, x_3^H}) − G(max{x_2^L, x_3^L})]].

Since max{l 2, x L 3 } = max{x L 2, x L 3 } = x L 3, this is equal to z x L 2 l(x L 2, )df () + δz[g(min{r 2, x H 3 }) G(x L 3 )]. Substituting for z from (6), the difference in the judge s continuation payoff between investigating and not investigating x L 2 is δk(x L 3, x H 3 ) δz[g(x H 3 ) G(x L 3 )] + δz [ G(min{R 2, x H 3 }) G(x L 3 ) ] < δk(x L 3, x H 3 ) < 0 Hence, the judge does not investigate x L 2 under binding precedent. A similar argument establishes that under binding precedent, the judge does not investigate x H 2 in period 2. Given the convexity of the set of cases that the judge investigates under either binding or persuasive precedent, this implies that the set of cases that the judge investigates in period 2 under binding precedent is contained in the set of cases he investigates under persuasive precedent. Now consider period 1. We show below that the judge investigates x L 1 under binding precedent. Recall that the judge is indifferent between investigating and not investigating x L 1 under persuasive precedent. That is, z = x L 1 l(x L 1, )df () + δv p 2 = x L 1 l(x L 1, )df () + δk(x L 2, x H 2 ) + δ 2 [1 G(x H 2 ) + G(x L 2 )]k(x L 3, x H 3 ) δz[g(x H 2 ) G(x L 2 ) + δ(g(x H 3 ) G(x L 3 ))] Under binding precedent, if the judge investigates x L 1 in period 1, her continuation payoff is z; if the judge does not investigate x L 1, her continuation payoff is x L 1 l(x L 1, )df () + δev2 b (x L 1, R 1 ). Note that EV2 b (x L 1, R 1 ) < V p 2 since the judge can follow the same policy under persuasive precedent as the optimal policy under binding precedent and receive a higher payoff. Hence, the judge investigates x L 1 period 1 under binding precedent. in 24

A similar argument establishes that under binding precedent, the judge investigates x_1^H. Given the convexity of the set of cases that the judge investigates under either binding or persuasive precedent, this implies that the set of cases that the judge investigates under binding precedent in period 1 contains the set of cases she investigates under persuasive precedent in period 1.

Proof of Proposition 2: Since the judge investigates x_1 and x_2, by (P1) and (P2), we have

−z ≥ max{∫_{θ̲}^{max{x, θ̲}} l(x, θ) dF(θ), ∫_{min{x, θ̄}}^{θ̄} l(x, θ) dF(θ)} + δ∫_0^1 V(x′) dG(x′)

for x ∈ {x_1, x_2}. Suppose ˆx ∈ [x_1, x_2]. Since ∫_{θ̲}^{max{x, θ̲}} l(x, θ) dF(θ) is decreasing in x, we have ∫_{θ̲}^{max{ˆx, θ̲}} l(ˆx, θ) dF(θ) ≤ ∫_{θ̲}^{max{x_1, θ̲}} l(x_1, θ) dF(θ). Since ∫_{min{x, θ̄}}^{θ̄} l(x, θ) dF(θ) is increasing in x, we have ∫_{min{ˆx, θ̄}}^{θ̄} l(ˆx, θ) dF(θ) ≤ ∫_{min{x_2, θ̄}}^{θ̄} l(x_2, θ) dF(θ). It follows that

−z ≥ max{∫_{θ̲}^{max{ˆx, θ̲}} l(ˆx, θ) dF(θ), ∫_{min{ˆx, θ̄}}^{θ̄} l(ˆx, θ) dF(θ)} + δ∫_0^1 V(x′) dG(x′),

and therefore the judge investigates ˆx in equilibrium.

Proof of Proposition 3: We prove the result by contradiction. Suppose that there exists an equilibrium in which a < θ̲. Let EV = ∫_0^1 V(x′) dG(x′). Since a < θ̲, the judge's dynamic payoff is −z if she investigates x = a, and δEV if she does not investigate x = a. Since µ*(a) = 1, we have −z ≥ δEV. Note that for any

case x > θ̄, the judge's dynamic payoff is −z if she investigates and δEV if she does not investigate, so it must be the case that the judge investigates any case x > θ̄. It follows that b = 1. Moreover, since a < θ̲, the judge makes the correct decision for any case x < a. It follows that

EV = ∫_0^a δEV dG(x) − z(1 − G(a)) = δG(a)EV − z(1 − G(a)).

Since −z ≥ δEV, this implies that EV ≥ δEV, which is impossible since EV < 0.

Proof of Proposition 4: Note that µ(s) = 1 if and only if T V(s) = −z + A(s). Suppose that V_1, V_2 ∈ F and consider any s ∈ S. Without loss of generality, suppose that T V_1(s) ≥ T V_2(s). For notational convenience, define µ_1 and µ_2 relative to V_1 and V_2. There are three cases to consider.

(i) Suppose that T V_1(s) = −z + A(s). Since T V_1(s) ≥ T V_2(s) ≥ −z + A(s), we have T V_2(s) = −z + A(s). We also have that µ_1(s) = 1 and µ_2(s) = 1. It follows that T V_1(s) − T V_2(s) = 0.

(ii) Suppose that T V_1(s) = g_p(s) + δEV_1(max{L, x}, R). Then µ_1(s) = 0. We have

T V_1(s) − T V_2(s) ≤ g_p(s) + δEV_1(max{L, x}, R) − g_p(s) − δEV_2(max{L, x}, R)
  = δ∫_0^1 [V_1(max{L, x}, R, x′) − V_2(max{L, x}, R, x′)] dG(x′)
  ≤ δ∫_0^1 |V_1(max{L, x}, R, x′) − V_2(max{L, x}, R, x′)| dG(x′)
  ≤ δ‖V_1 − V_2‖.

(iii) Suppose that T V_1(s) = g_b(s) + δEV_1(L, min{x, R}). Then a similar argument as in case (ii) shows that T V_1(s) − T V_2(s) ≤ δ‖V_1 − V_2‖.

Since either T V_1(s) − T V_2(s) = 0 or T V_1(s) − T V_2(s) ≤ δ‖V_1 − V_2‖ for any s ∈ S in all three cases, we have ‖T V_1 − T V_2‖ ≤ δ‖V_1 − V_2‖, and therefore T is a contraction.
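The case analysis in the proof of Proposition 4 can be checked numerically on a toy operator. The following is an illustrative sketch, not part of the paper: an operator of the form TV(s) = max{−z + A(s), g(s) + δ·mean(V)}, built from a "stopping" payoff A(s) and a flow payoff g(s) plus a discounted average continuation, is a δ-contraction in the sup norm for any A and g. The arrays A and g below are arbitrary stand-ins for the paper's primitives.

```python
import numpy as np

# Numerical illustration, not part of the paper: an operator
# TV(s) = max{-z + A(s), g(s) + delta * mean(V)} is a delta-contraction in the
# sup norm, whatever the (hypothetical) payoffs A and g are.
rng = np.random.default_rng(0)
delta, z, n = 0.9, 0.1, 50
A = rng.normal(size=n)      # hypothetical stopping payoffs
g = rng.normal(size=n)      # hypothetical flow payoffs from continuing

def T(V):
    return np.maximum(-z + A, g + delta * V.mean())

def sup_dist(V1, V2):
    return float(np.max(np.abs(V1 - V2)))

# Observed contraction factor over random pairs of value functions
ratios = []
for _ in range(200):
    V1, V2 = rng.normal(size=n), rng.normal(size=n)
    ratios.append(sup_dist(T(V1), T(V2)) / sup_dist(V1, V2))
worst = max(ratios)

# Banach fixed point: iterating T converges to the unique V with TV = V
V = np.zeros(n)
for _ in range(500):
    V = T(V)
```

Because |max(a, b_1) − max(a, b_2)| ≤ |b_1 − b_2| and the averaging continuation is nonexpansive, every observed ratio is at most δ, mirroring the pointwise bounds in cases (i)–(iii).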

Proof of Proposition 5: Recall that T V(s) = max{−z + A(s), g_p(s) + δEV(max{x, L}, R), g_b(s) + δEV(L, min{x, R})}. Let KV(s) = 1 if T V(s) = −z + A(s) and KV(s) = 0 otherwise. To prove the proposition, we first establish the following lemma. In what follows, (ˆL, ˆR) denotes a precedent that is tighter than (L, R), that is, L ≤ ˆL < ˆR ≤ R.

Lemma 4. If V ∈ F satisfies the following properties: (i) V is decreasing in L and increasing in R, (ii) EV(L, R) − EV(ˆL, ˆR) ≤ C(L, R) − C(ˆL, ˆR), and (iii) KV is decreasing in L and increasing in R, then T V also satisfies these properties, that is, (i) T V is decreasing in L and increasing in R, (ii) ET V(L, R) − ET V(ˆL, ˆR) ≤ C(L, R) − C(ˆL, ˆR), and (iii) KT V is decreasing in L and increasing in R.

Proof: We first show that if V ∈ F is decreasing in L and increasing in R, then T V is also decreasing in L and increasing in R. Fix x ∈ [0, 1]. If V is decreasing in L and increasing in R, then EV(max{x, L}, R) and EV(L, min{x, R}) are decreasing in L and increasing in R. Note that A(s) is decreasing in L and increasing in R, g_p(s) is constant in L and increasing in R, and g_b(s) is constant in R and decreasing in L. Hence, T V(s) is decreasing in L and increasing in R.

Let ŝ = (ˆL, ˆR, x). We next show that if V ∈ F satisfies properties (i), (ii), and (iii), then ET V(L, R) − ET V(ˆL, ˆR) ≤ C(L, R) − C(ˆL, ˆR). Consider the following cases.

(a) Suppose T V(ŝ) = −z + A(ŝ). Then KV(ŝ) = 1. Since KV is decreasing in L and increasing in R, we have KV(s) = 1, which implies that T V(s) = −z + A(s). Hence, T V(s) − T V(ŝ) = A(s) − A(ŝ).

(b) Suppose T V(ŝ) > −z + A(ŝ). Without loss of generality, suppose that T V(ŝ) = g_p(ŝ) + δEV(max{x, ˆL}, ˆR). Note that KV(ŝ) = 0 and x < ˆR. Suppose KV(s) = 1. Then T V(s) = −z + A(s) and T V(s) − T V(ŝ) < A(s) − A(ŝ). Suppose KV(s) = 0 and T V(s) = g_p(s) + δEV(max{x, L}, R). Then T V(s) − T V(ŝ) = δ[EV(max{x, L}, R) − EV(max{x, ˆL}, ˆR)] ≤ δ[EV(L, R) − EV(ˆL, ˆR)] ≤ δ[C(L, R) − C(ˆL, ˆR)]. Suppose T V(s) = g_b(s) + δEV(L, min{x, R}). There are two cases to consider: either x > ˆL or x ≤ ˆL. First suppose x > ˆL. Then g_b(ŝ) =

g_b(s). Since T V(ŝ) ≥ g_b(ŝ) + δEV(ˆL, min{x, ˆR}), it follows that T V(s) − T V(ŝ) ≤ δEV(L, min{x, R}) − δEV(ˆL, min{x, ˆR}) ≤ δ[C(L, R) − C(ˆL, ˆR)]. Next suppose x ≤ ˆL. Note that T V(ŝ) = g_p(ŝ) + δEV(max{x, ˆL}, ˆR) = g_p(ŝ) + δEV(ˆL, ˆR) and A(ŝ) = g_p(ŝ) + δC(ˆL, ˆR). Hence, A(ŝ) − T V(ŝ) = δC(ˆL, ˆR) − δEV(ˆL, ˆR). Note also that T V(s) = g_b(s) + δEV(L, x) ≤ g_b(s) + δEV(L, R) and A(s) ≥ g_b(s) + δC(L, R). Hence, A(s) − T V(s) ≥ δC(L, R) − δEV(L, R). It follows that A(s) − T V(s) − A(ŝ) + T V(ŝ) ≥ δC(L, R) − δEV(L, R) − δC(ˆL, ˆR) + δEV(ˆL, ˆR) ≥ 0. Therefore, T V(s) − T V(ŝ) ≤ A(s) − A(ŝ).

It follows that for all x ∈ [0, 1], we have T V(s) − T V(ŝ) ≤ A(s) − A(ŝ), and therefore ET V(L, R) − ET V(ˆL, ˆR) ≤ E[A(s) − A(ŝ)] = C(L, R) − C(ˆL, ˆR).

Lastly, we show that if V ∈ F satisfies properties (i), (ii), and (iii), then KT V is decreasing in L and increasing in R. Since KT V(s) ∈ {0, 1} for any s ∈ S, it is sufficient to show that if KT V(ŝ) = 1, then KT V(s) = 1. Suppose KT V(ŝ) = 1. Consider x ∈ (ˆL, ˆR) first. Then we have

−z + A(ŝ) ≥ max{g_p(ŝ) + δET V(x, ˆR), g_b(ŝ) + δET V(ˆL, x)}.   (7)

Note that in this case, A(ŝ) = δC(ˆL, ˆR), A(s) = δC(L, R), g_p(ŝ) = g_p(s), and g_b(ŝ) = g_b(s). As established earlier, if V ∈ F satisfies properties (i), (ii), and (iii), then T V is decreasing in L and increasing in R and ET V(L, R) − ET V(ˆL, ˆR) ≤ C(L, R) − C(ˆL, ˆR). Since L ≤ ˆL < x < ˆR ≤ R, we have max{L, x} = max{ˆL, x} = x and min{x, R} = min{x, ˆR} = x. It follows that ET V(max{L, x}, R) − ET V(max{ˆL, x}, ˆR) = ET V(x, R) − ET V(x, ˆR) and ET V(L, min{x, R}) − ET V(ˆL, min{x, ˆR}) = ET V(L, x) − ET V(ˆL, x). Since ET V(x, R) − ET V(x, ˆR) ≤ C(x, R) − C(x, ˆR) and C(x, R) − C(x, ˆR) ≤ C(L, R) − C(ˆL, ˆR), it follows that ET V(max{L, x}, R) − ET V(max{ˆL, x}, ˆR) ≤ C(L, R) − C(ˆL, ˆR). Similarly, since ET V(L, x) − ET V(ˆL, x) ≤ C(L, x) − C(ˆL, x) and C(L, x) − C(ˆL, x) ≤ C(L, R) − C(ˆL, ˆR), it follows that ET V(L, min{x, R}) − ET V(ˆL, min{x, ˆR}) ≤ C(L, R) − C(ˆL, ˆR).

It then follows from (7) that

−z + A(s) ≥ max{g_p(s) + δET V(max{L, x}, R), g_b(s) + δET V(L, min{x, R})},

and therefore KT V(s) = 1. Next consider x ∉ (ˆL, ˆR), and without loss of generality, suppose that x ≤ ˆL.