Estimation of dynamic discrete choice models


1. Estimation of dynamic discrete choice models
Jean-François Houde, Cornell University & NBER
February 23, 2018

2. Introduction: Dynamic Discrete Choices
We start with single-agent models of dynamic decisions:
- Machine replacement and investment decisions: Rust (1987)
- Renewal or exit decisions: Pakes (1986)
- Inventory control: Erdem, Imai, and Keane (2003), Hendel and Nevo (2006)
- Experience goods and Bayesian learning: Erdem and Keane (1996), Ackerberg (2003), Crawford and Shum (2005)
- Demand for durable goods: Gordon (2010), Gowrisankaran and Rysman (2012), Lee (2013)
This lecture focuses on econometric methods; the next lecture will discuss mostly applications. After that, we turn to questions related to the dynamics of industries:
- Markov-perfect dynamic games
- Empirical models of static and dynamic games
(These lecture notes incorporate material from Victor Aguirregabiria's graduate IO slides at the University of Toronto.)

3. Machine replacement and investment decisions
Consider a firm producing a good at N plants (indexed by i) that operate independently. Each plant has a machine.
Examples:
- Rust (1987): each plant is a Madison, WI bus, and Harold Zurcher is the plant operator.
- Das (1992): cement plants, where the machines are cement kilns.
- Rust and Rothwell (1995): maintenance of nuclear power plants.
Related applications: export decisions (Das et al. (2007)), replacement of durable goods (Adda and Cooper (2000), Gowrisankaran and Rysman (2012)).

4. Bus Replacement: Rust (1987)
Profit function at time t:
\pi_t = \sum_{i=1}^{N} \big( y_{it} - rc_{it} \big)
where y_{it} is the plant's variable profit and rc_{it} is the cost of replacing the machine.
Replacement and depreciation:
- Replacement cost: rc_{it} = a_{it} RC(x_{it}), where \partial RC(x)/\partial x \geq 0 and a_{it} = 1 if the machine is replaced. In the application, RC(x_{it}) = \theta_{R0} + \theta_{R1} x_{it}.
- State variables: machine age x_{it} and choice-specific profit shocks \{\epsilon_{it}(0), \epsilon_{it}(1)\}.
- Variable profits are decreasing in the age x_{it} of the machine and increasing in the profit shock \epsilon_{it}(a_{it}):
y_{it} = Y\big( (1 - a_{it}) x_{it}, \epsilon_{it}(a_{it}) \big), where \partial Y/\partial x < 0.

5. Profits and Depreciation
Variable profit (step function):
\pi_{it} = \begin{cases} Y(0, \epsilon_{it}(1)) - RC(x_{it}) & \text{if } a_{it} = 1 \\ Y(x_{it}, \epsilon_{it}(0)) & \text{otherwise.} \end{cases}
Aging/depreciation process:
- Deterministic: x_{it+1} = (1 - a_{it}) x_{it} + 1
- Stochastic: x_{it+1} = (1 - a_{it}) x_{it} + \xi_{t+1}
Note: in Rust (1987), x_{it} is bus mileage. It follows a random-walk process with a log-normal distribution.
Assumptions:
1. Additively separable (AS) profit shock: Y((1-a)x, \epsilon(a)) = \theta_{Y0} + \theta_{Y1} (1-a) x + \epsilon(a)
2. Conditional independence (CI): f(\epsilon_{t+1} | \epsilon_t, x_t) = f(\epsilon_{t+1})
3. Aging follows a discrete random-walk process: x_{it} \in \{0, 1, \ldots, M\}, and the matrix F(x' | x, a) characterizes its controlled Markov transition process.

6. Dynamic Optimization
Harold Zurcher maximizes expected discounted future profits:
V(a_{it} | x_{it}, \epsilon_{it}) = E\Big( \sum_{\tau=0}^{\infty} \beta^{\tau} \pi_{it+\tau} \,\Big|\, x_{it}, \epsilon_{it}, a_{it} \Big)
Recursive formulation (Bellman equation):
V(a | x, \epsilon) = Y((1-a)x) - RC(a, x) + \epsilon(a) + \beta \sum_{x'} E_{\epsilon'}\big( V(x', \epsilon') \big) F(x' | x, a) = v(a, x) + \epsilon(a)
where V(x, \epsilon) \equiv \max_{a \in \{0,1\}} V(a | x, \epsilon).
Optimal replacement decision:
a = \begin{cases} 1 & \text{if } v(1, x) - v(0, x) \equiv \tilde v(x) > \epsilon(0) - \epsilon(1) \equiv \tilde\epsilon \\ 0 & \text{otherwise.} \end{cases}
If \{\epsilon(0), \epsilon(1)\} are distributed T1EV with unit variance, the replacement probability takes the logit form derived below.

7. Solution to the dynamic-programming (DP) problem
Assumptions (1) and (2) imply that we only need to numerically find a fixed point of the Emax function V(x) (M elements):
V(x) = E_{\epsilon}\Big( \max_a \; v(a, x) + \epsilon(a) \Big) = E_{\epsilon}\Big( \max_a \; \Pi(a, x) + \beta \sum_{x'} V(x') F(x' | x, a) + \epsilon(a) \Big) = \Gamma(x | V)
where \Pi(a, x) = Y((1-a)x) - RC(a, x), and \Gamma(x | V) is a contraction mapping.
Matrix-form representation using the T1EV distribution assumption:
V = \ln\big( \exp(\Pi(0) + \beta F(0) V) + \exp(\Pi(1) + \beta F(1) V) \big) + \gamma = \Gamma(V)
where \gamma is Euler's constant, and F(0) and F(1) are two M \times M conditional transition probability matrices.

8. Algorithm 1: Value Function Iteration
Fixed objects:
- Payoffs (M \times 1): \Pi(a) = \{\theta_0 + \theta_x (1-a) x_i - RC(a, x_i)\}_{i=1,\ldots,M} for a \in \{0, 1\}
- Conditional transition probabilities (M \times M): F(a) for a \in \{0, 1\}, with F_{j,k}(a) = F(x_{t+1} = x_k | x_t = x_j, a_t = a)
- Stopping rule: \eta
Value function iteration algorithm:
1. Guess an initial value V^0(x). Example: the static value function V^0(x) = \ln(\exp(\Pi(0)) + \exp(\Pi(1))) + \gamma
2. Update the value function at iteration k: V^k = \ln\big( \exp(\Pi(0) + \beta F(0) V^{k-1}) + \exp(\Pi(1) + \beta F(1) V^{k-1}) \big) + \gamma
3. Stop if ||V^k - V^{k-1}|| < \eta. Otherwise, repeat steps (2)-(3).
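A minimal Python sketch of Algorithm 1 for a simplified bus-replacement model. All numerical values (M, \beta, the \theta's) and the deterministic aging rule are illustrative assumptions, not Rust's (1987) specification or estimates.

```python
# Value-function iteration for a simplified bus-replacement model (Algorithm 1).
import numpy as np

M, beta, eta = 90, 0.95, 1e-10
theta_Y0, theta_Y1, theta_R0, theta_R1 = 2.0, -0.15, 8.0, 0.05  # illustrative values
gamma = 0.5772156649  # Euler's constant

x = np.arange(M)
# Flow payoffs Pi(a,x): keep the machine (a=0) or replace it (a=1)
Pi = np.column_stack([theta_Y0 + theta_Y1 * x,                  # a = 0
                      theta_Y0 - (theta_R0 + theta_R1 * x)])    # a = 1

# Deterministic aging: x' = min((1-a)x + 1, M-1)
F = np.zeros((2, M, M))
F[0, x, np.minimum(x + 1, M - 1)] = 1.0   # keep: age one more period
F[1, :, 1] = 1.0                          # replace: restart at x = 1

V = np.logaddexp(Pi[:, 0], Pi[:, 1]) + gamma   # static value as initial guess
for it in range(10_000):
    v = Pi + beta * np.column_stack([F[0] @ V, F[1] @ V])   # choice-specific values
    V_new = np.logaddexp(v[:, 0], v[:, 1]) + gamma          # Emax update
    if np.max(np.abs(V_new - V)) < eta:
        V = V_new
        break
    V = V_new

P_replace = 1.0 / (1.0 + np.exp(v[:, 0] - v[:, 1]))  # implied CCP of replacement
print(f"converged in {it} iterations; P(replace | x=60) = {P_replace[60]:.3f}")
```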

9. Policy Function Representation
Define the conditional choice-probability (CCP) mapping:
P(x) = \Pr\Big( \Pi(1, x) + \beta \sum_{x'} V(x') F(x' | x, 1) + \epsilon(1) \;\geq\; \Pi(0, x) + \beta \sum_{x'} V(x') F(x' | x, 0) + \epsilon(0) \Big) = \frac{\exp(\tilde v(x))}{1 + \exp(\tilde v(x))} = \big( 1 + \exp(-\tilde v(x)) \big)^{-1}    (1)
where \tilde v(x) = v(1, x) - v(0, x).
At the optimal CCP, we can write the Emax function as follows:
V^P(x) = (1 - P(x)) \Big[ \Pi(0, x) + e(0, x) + \beta \sum_{x'} V^P(x') F(x' | x, 0) \Big] + P(x) \Big[ \Pi(1, x) + e(1, x) + \beta \sum_{x'} V^P(x') F(x' | x, 1) \Big]
where e(a, x) = E(\epsilon(a) | a, x) is the conditional expectation of \epsilon(a).

10. Policy Function Representation (continued)
If \epsilon(a) is T1EV distributed, this expectation has a closed form: e(a, x) = \gamma - \ln P(a | x).
This implicitly defines the value function in terms of the CCP vector:
V^P = (I - \beta F^P)^{-1} \big[ (1 - P) \circ (\Pi(0) + e(0)) + P \circ (\Pi(1) + e(1)) \big]    (2)
where F^P = (1 - P) \circ F(0) + P \circ F(1) and \circ is the element-by-element multiplication operator.
Equations (1) and (2) define a fixed point in P:
P = \Psi(P)
where \Psi(\cdot) is a contraction mapping.

11. Algorithm 2: Policy Function Iteration
1. Guess an initial value for the CCP. Example: the static choice probability P(x) = \big( 1 + \exp(-(\Pi(x|1) - \Pi(x|0))) \big)^{-1}
2. Calculate the expected value function:
V^{k-1} = (I - \beta F^{k-1})^{-1} \big[ (1 - P^{k-1}) \circ (\Pi(0) + e^{k-1}(0)) + P^{k-1} \circ (\Pi(1) + e^{k-1}(1)) \big]
3. Update the CCP:
P^k(x) = \Psi(P^{k-1}(x)) = \big( 1 + \exp(-\tilde v^{k-1}(x)) \big)^{-1}, where \tilde v^{k-1} = (\Pi(1) + \beta F(1) V^{k-1}) - (\Pi(0) + \beta F(0) V^{k-1}).
4. Stop if ||P^k - P^{k-1}|| < \eta. Otherwise, repeat steps (2)-(4).
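A matching sketch of Algorithm 2 for the same illustrative model: each iteration recovers V^P by a linear solve and then applies the CCP update \Psi. Every primitive below is again an assumed placeholder.

```python
# Policy-function (CCP) iteration for the simplified bus-replacement model (Algorithm 2).
import numpy as np

M, beta, eta = 90, 0.95, 1e-10
theta_Y0, theta_Y1, theta_R0, theta_R1 = 2.0, -0.15, 8.0, 0.05
gamma = 0.5772156649

x = np.arange(M)
Pi = np.column_stack([theta_Y0 + theta_Y1 * x,
                      theta_Y0 - (theta_R0 + theta_R1 * x)])
F = np.zeros((2, M, M))
F[0, x, np.minimum(x + 1, M - 1)] = 1.0
F[1, :, 1] = 1.0

P = 1.0 / (1.0 + np.exp(Pi[:, 0] - Pi[:, 1]))   # static CCP as starting guess
for k in range(1_000):
    e0, e1 = gamma - np.log(1.0 - P), gamma - np.log(P)    # e(a,x) = gamma - ln P(a|x)
    FP = (1.0 - P)[:, None] * F[0] + P[:, None] * F[1]      # policy-weighted transitions
    b = (1.0 - P) * (Pi[:, 0] + e0) + P * (Pi[:, 1] + e1)
    V = np.linalg.solve(np.eye(M) - beta * FP, b)            # V^P = (I - beta F^P)^{-1} b
    v_tilde = (Pi[:, 1] + beta * F[1] @ V) - (Pi[:, 0] + beta * F[0] @ V)
    P_new = 1.0 / (1.0 + np.exp(-v_tilde))                   # CCP update Psi(P)
    if np.max(np.abs(P_new - P)) < eta:
        P = P_new
        break
    P = P_new

print(f"policy iteration converged in {k} steps")
```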

12. Value-function versus Policy-function Algorithms
- Both algorithms are guaranteed to converge if \beta \in (0, 1).
- Policy-function iteration converges in fewer steps than value-function iteration. However, each step of the policy-function algorithm is slower due to the matrix inversion, and M is typically very large (in the millions).
- If M is very large, it can be faster and more accurate to obtain V by solving the linear system Ay = b directly (e.g. linsolve in Matlab):
(I - \beta F^{k-1}) V^{k-1} = (1 - P^{k-1}) \circ (\Pi(0) + e^{k-1}(0)) + P^{k-1} \circ (\Pi(1) + e^{k-1}(1))
- Suggested hybrid algorithm: start with value-function iteration while ||V^k(x) - V^{k-1}(x)|| > \eta_1, and switch to policy-function iteration once ||V^k(x) - V^{k-1}(x)|| < \eta_1, where \eta_1 > \eta (e.g. \eta_1 = 10^{-2}).

13. Estimation: Nested fixed-point MLE
Data: panel of choices a_{it} and observed states x_{it}.
Parameters: technology parameters \theta = \{\theta_{Y0}, \theta_{Y1}, \theta_{R0}, \theta_{R1}\}, discount factor \beta, and the distribution of mileage shocks f_x(\xi_{it}).
Initial step: if the panel is long enough, we can estimate f_x(\xi) directly from the data. The estimated process can then be discretized to construct \hat F(1) and \hat F(0).
Maximum likelihood problem:
\max_{\theta, \beta} \sum_i \sum_t a_{it} \ln P(x_{it}) + (1 - a_{it}) \ln(1 - P(x_{it}))
subject to P(x_{it}) = \Psi(x_{it}) for all x_{it}.
In practice, we need two functions:
- Likelihood: evaluate L(\theta, \beta) given P(x_{it}).
- Fixed point: a routine that solves for P(x_{it}) at every guess of (\theta, \beta).
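The nested fixed-point structure can be sketched as an outer likelihood search wrapping the inner CCP solver. The snippet below simulates a panel from assumed "true" parameters, holds \beta fixed, and estimates a two-parameter simplification (\theta_{Y1} and a constant replacement cost) by maximum likelihood; it illustrates the mechanics and is not a replication of Rust (1987).

```python
# Nested fixed-point MLE: an outer likelihood search that re-solves the CCPs
# at every trial value of theta.  All values and the simplified payoff are assumptions.
import numpy as np
from scipy.optimize import minimize

M, beta, gamma = 90, 0.95, 0.5772156649
x_grid = np.arange(M)
F = np.zeros((2, M, M))
F[0, x_grid, np.minimum(x_grid + 1, M - 1)] = 1.0
F[1, :, 1] = 1.0

def solve_ccp(theta):
    """Inner fixed point: return P(replace | x) given theta = (thY1, thR0)."""
    thY1, thR0 = theta
    Pi = np.column_stack([2.0 + thY1 * x_grid, np.full(M, 2.0 - thR0)])
    V = np.logaddexp(Pi[:, 0], Pi[:, 1]) + gamma
    for _ in range(5_000):
        v = Pi + beta * np.column_stack([F[0] @ V, F[1] @ V])
        V_new = np.logaddexp(v[:, 0], v[:, 1]) + gamma
        if np.max(np.abs(V_new - V)) < 1e-10:
            break
        V = V_new
    return 1.0 / (1.0 + np.exp(v[:, 0] - v[:, 1]))

# Simulate a panel from "true" parameters (for illustration only)
rng = np.random.default_rng(0)
P_true = solve_ccp([-0.15, 8.0])
x_it = rng.integers(0, M, size=20_000)
a_it = (rng.random(x_it.size) < P_true[x_it]).astype(int)

def neg_loglik(theta):
    P = solve_ccp(theta)
    p = np.clip(P[x_it], 1e-12, 1 - 1e-12)
    return -np.sum(a_it * np.log(p) + (1 - a_it) * np.log(1 - p))

res = minimize(neg_loglik, x0=[-0.1, 5.0], method="Nelder-Mead")
print("estimated (theta_Y1, theta_R0):", res.x)
```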

14. Incorporating Unobserved Heterogeneity
Why? Relax the conditional independence assumption.
Example: buses have heterogeneous replacement costs (K types).
- This adds the type-specific parameters \{\theta^1_{R0}, \ldots, \theta^K_{R0}\} and the probability weights \{\omega_1, \ldots, \omega_{K-1}\}.
- E.g.: discretize a parametric distribution, \ln \theta^i_{R0} \sim N(\mu, \sigma^2).
This changes the MLE problem:
\max_{\theta, \beta, \omega} \sum_i \ln \Big[ \sum_k g(k | x_{i1}) \prod_t P_k(x_{it})^{a_{it}} (1 - P_k(x_{it}))^{1 - a_{it}} \Big]
subject to P_k(x_{it}) = \Psi_k(x_{it}) for all x_{it} and every type k.
Here g(k | x_{i1}) is the probability that bus i is of type k conditional on its initial mileage x_{i1} (i.e. the initial condition problem). How do we calculate g(k | x_{i1})?

15. Side note: The initial condition problem
Unobserved heterogeneity creates a correlation between the initial state (i.e. initial mileage x_{i1}) and types (Heckman 1981). Two solutions:
- New buses: exogenous initial assignment, g(k | x_{i1}) = \omega_k.
- Limiting distribution: for each type k, the bus-engine replacement model generates a finite-state Markov chain defined by
F_k(x' | x) = \sum_a P_k(a | x) F(x' | x, a).
Under fairly general assumptions, this process has a unique limiting distribution:
\pi_k(x') = \sum_{i=1}^{M} F_k(x_{t+1} = x' | x_t = x_i) \pi_k(x_i), i.e. \pi_k = F_k^T \pi_k.
We can use the limiting distribution to calculate the type probability conditional on initial mileage:
g(k | x_{i1}) = \frac{\omega_k \pi_k(x_{i1})}{\sum_{k'} \omega_{k'} \pi_{k'}(x_{i1})}
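A short sketch of the limiting-distribution argument: build the type-specific controlled transition matrix F_k, compute its stationary distribution, and apply Bayes' rule to obtain g(k | x_1). The type-specific CCPs and the weights \omega below are placeholders.

```python
# Type probabilities from the limiting distribution of each type's controlled Markov chain.
import numpy as np

M, K = 90, 2
x = np.arange(M)
F = np.zeros((2, M, M))
F[0, x, np.minimum(x + 1, M - 1)] = 1.0
F[1, :, 1] = 1.0

# Placeholder CCPs: type 1 replaces more aggressively than type 0.
ccp_by_type = np.vstack([1 / (1 + np.exp(4 - 0.08 * x)),
                         1 / (1 + np.exp(2 - 0.08 * x))])
omega = np.array([0.6, 0.4])            # unconditional type weights (assumed)

def stationary(Pk):
    """Stationary distribution of F_k = (1-P) F(0) + P F(1), via the unit eigenvector."""
    Fk = (1 - Pk)[:, None] * F[0] + Pk[:, None] * F[1]
    vals, vecs = np.linalg.eig(Fk.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    return pi / pi.sum()

pi_k = np.vstack([stationary(ccp_by_type[k]) for k in range(K)])  # K x M

def g_type_given_x1(x1):
    """Bayes' rule: g(k | x1) = omega_k pi_k(x1) / sum_k' omega_k' pi_k'(x1)."""
    w = omega * pi_k[:, x1]
    return w / w.sum()

print("g(k | x1=30) =", g_type_given_x1(30))
```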

16. Identification: Residual profit
- Assumption: parametric distribution function F_\epsilon, with the standard normalization \sigma_\epsilon = 1.
- This means that we cannot identify the dollar value of replacement costs, only their value relative to variable profits. This is true in any discrete-choice problem.
- When profit or output data are available, we can relax this normalization and estimate \sigma_\epsilon (e.g. investment and production data).

17. Identification: Discount Factor
The data is summarized by the empirical hazard function:
h(x) = \Pr(\text{replacement}_t | \text{miles}_t = x)
This corresponds to the reduced form of the model:
h(x) = P(x) = F_\epsilon(\tilde v(x)) = F_\epsilon\Big( \Pi(1, x) - \Pi(0, x) + \beta \sum_{x'} V(x') \big( F(x' | x, 1) - F(x' | x, 0) \big) \Big)
Claim: \beta is not identified unless we parametrize the payoffs Y and RC.
- If \Pi(x) is linear in x, then non-linearity in the observed hazard function identifies \beta.
- If \Pi(x) is a non-parametric function, we cannot distinguish between a non-linear myopic model (\beta = 0) and a forward-looking model (\beta > 0).
What would identify \beta? An exclusion restriction: a state variable z that enters only the Markov transition function (i.e. F(x' | x, z, a)) and not the static payoff function.

18. Empirical Hazard Function
[Figure]

19. Identification of β and search for the right specification
[Figure]

20. Main estimation results
[Table]

21. Patents as options: Pakes (1986)
This paper studies the value of patent protection: (i) what is the stochastic process determining the value of innovations? (ii) how do patent protection laws affect the decision to renew patents and the distribution of returns to innovation?
The model is an example of an optimal stopping problem. It is set up with a finite horizon, but it does not have to be. Other examples: retirement, firm exit decisions, technology adoption, etc.
Contributions:
- Illustrates how we can infer the implicit option value of patents (or any other dynamic investment decision) from dynamic discrete choices (i.e. the principle of revealed preference). This is done without actually observing profits or revenues from patents; only the dynamic structure of renewal costs is needed.
- More technically, the paper is one of the first applications of simulation methods in econometrics (very influential).

22. Data and Institutional Details
- Three countries: France, Germany, and the UK.
- Renewal data for all patents: n_{m,t}(a) = number of surviving patents at age a in country m from cohort t.
- Regulatory environment by country/cohort:
  - f: number of automatic renewal years
  - L: expiration date of the patent
  - c = \{c_1, \ldots, c_L\}: deterministic renewal cost schedule

23. Country differences in drop-out probabilities
[Figure]

24. Country differences in renewal fee schedules
[Figure]

25. Model setup
Consider the renewal problem for patent i.
- Stochastic sequence of returns from the patent: r_i = \{r_{i1}, \ldots, r_{iL}\}.
- The evolution of returns depends on: (1) the initial quality level, (2) the arrival of substitute innovations that depreciate the value of the patent, and (3) the arrival of complementary innovations that increase its value.
Structural parameters (per country):
- \delta measures the normal obsolescence rate
- \phi and \sigma determine the arrival rate and magnitude of complementary innovations
- \lambda determines the arrival rate of substitute innovations
- \mu_0 and \sigma_0 determine the initial quality pool of innovations
- The discount factor \beta is fixed.

26. Stochastic Process
Markov process for returns:
r_{it+1} = \tau_{it+1} \max\{\delta r_{it}, \xi_{it+1}\}
where
\Pr(\tau_{it+1} = 0 | r_{it}, t) = \exp(-\lambda r_{it}),
p(\xi_{it+1} | r_{it}, t) = \frac{1}{\phi^t \sigma} \exp\Big( -\frac{\gamma + \xi_{it+1}}{\phi^t \sigma} \Big),
r_{i0} \sim LN(\mu_0, \sigma_0^2),
or, more compactly, for t > 0:
f(r_{it+1} | r_{it}, t) = \begin{cases} \exp(-\lambda r_{it}) & \text{if } r_{it+1} = 0 \\ \Pr(\xi_{it+1} < \delta r_{it} | r_{it}, t) & \text{if } r_{it+1} = \delta r_{it} \\ \frac{1}{\phi^t \sigma} \exp\big( -\frac{\gamma + \xi_{it+1}}{\phi^t \sigma} \big) & \text{if } r_{it+1} > \delta r_{it} \end{cases}

27. Optimal stopping problem
In the last year, the renewal value depends only on c_L and r_{iL}:
V(L, r_{iL}) = \max\{0, r_{iL} - c_L\}
so the patent is renewed if r_{iL} > \bar r_L = c_L.
At year L-1, the value is defined recursively:
V(L-1, r_{iL-1}) = \max\Big\{ 0, \; r_{iL-1} - c_{L-1} + \beta \int V(L, r_{iL}) f(r_{iL} | r_{iL-1}, L-1) \, dr_{iL} \Big\}
This value function is strictly increasing in r_{iL-1} (see Proposition 1). Therefore, there exists a unique threshold such that the patent is renewed if
r_{iL-1} > \bar r_{L-1} = c_{L-1} - \beta \int_{\bar r_L}^{\infty} V(L, r_{iL}) f(r_{iL} | \bar r_{L-1}, L-1) \, dr_{iL}

28. Optimal stopping problem (continued)
Similarly, for any year t > 0 the value function is defined recursively as
V(t, r_{it}) = \max\Big\{ 0, \; r_{it} - c_t + \beta \int_{\bar r_{t+1}}^{\infty} V(t+1, r_{it+1}) f(r_{it+1} | r_{it}, t) \, dr_{it+1} \Big\}
which leads to a sequence of optimal stopping rules:
r_{it} > \bar r_t = c_t - \beta \int_{\bar r_{t+1}}^{\infty} V(t+1, r_{it+1}) f(r_{it+1} | \bar r_t, t) \, dr_{it+1}
Given the functional-form assumptions on f(r' | r_t, t), the thresholds can be solved analytically by backward induction.
Note: when the terminal period is stochastic, the value function becomes stationary (i.e. an infinite-horizon problem). For instance, optimal stopping problems of this kind arise when studying retirement or exit decisions:
V(s_t) = \max\Big\{ 0, \; \pi(s_t) + \beta \int (1 - \delta(s_t)) V(s_{t+1}) f(s_{t+1} | s_t) \, ds_{t+1} \Big\}
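The sketch below computes the thresholds \bar r_t by backward induction on a return grid, approximating the continuation integral by Monte Carlo. All parameter values and the renewal-fee schedule are invented, and the innovation draw uses a plain exponential rather than the paper's exact shifted-exponential density.

```python
# Backward induction for the renewal thresholds r_bar_t on a return grid.
import numpy as np

L, beta, delta, lam, phi, sigma = 20, 0.95, 0.9, 0.05, 0.6, 10.0
cost = 1.0 * 1.15 ** np.arange(1, L + 1)      # assumed rising renewal fees c_1..c_L
r_grid = np.linspace(0.0, 60.0, 200)
S = 2_000
rng = np.random.default_rng(1)

def draw_next(r, t):
    """Sample r_{t+1} | r_t = r: death with prob exp(-lam*r), else max(delta*r, xi)."""
    alive = rng.random(S) > np.exp(-lam * r)
    xi = rng.exponential(scale=phi ** t * sigma, size=S)   # simplified innovation draw
    return alive * np.maximum(delta * r, xi)

V = np.maximum(0.0, r_grid - cost[-1])         # terminal year: V(L, r) = max(0, r - c_L)
thresholds = [cost[-1]]
for t in range(L - 1, 0, -1):                  # t = L-1, ..., 1
    V_new = np.empty_like(V)
    for j, r in enumerate(r_grid):
        r_next = draw_next(r, t)
        cont = np.interp(r_next, r_grid, V).mean()   # E[V(t+1, r') | r, t]
        V_new[j] = max(0.0, r - cost[t - 1] + beta * cont)
    V = V_new
    renewed = np.nonzero(V > 0)[0]
    thresholds.append(r_grid[renewed[0]] if renewed.size else np.inf)

print("renewal thresholds r_bar_t (t = 1..L):", np.round(thresholds[::-1], 2))
```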

29. Estimation Method
Likelihood of the observed renewal sequence N_m, conditional on the regulatory environment Z_m = \{L_m, f_m, c_m\} of country m:
\max_\theta \; L(N_m | Z_m, \theta) = \sum_{t=1}^{L} n_m(t) \ln \Pr(t^* = t | Z_m, \theta)
where
\Pr(t^* = t | \theta, Z_m) = \int_0^{\infty} \int_{\bar r_1}^{\infty} \cdots \int_{\bar r_{t-1}}^{\infty} \int_0^{\bar r_t} dF(r_{i1}, \ldots, r_{it-1}, r_{it}) \, dF_0(r_{i0})
Monte Carlo approximation of the integral:
0. Sample r_{i0}^s \sim LN(\mu_0, \sigma_0^2).
1. Period 1:
   (a) Sample \tau_1^s, with \Pr(\tau_1^s = 0) = \exp(-\lambda r_0^s).
   (b) If \tau_1^s = 1, sample \xi_1^s from the exponential distribution; otherwise the patent is not renewed: a_1^s = 0.
   (c) Calculate r_1^s.
   (d) Evaluate the decision: a_1^s = 1 if r_1^s > \bar r_1.
...
t. Repeat the sampling for period t if the patent was renewed at t-1.
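These simulation steps can be coded directly. The sketch below draws S return paths and records the age at which each simulated patent is first allowed to lapse; the thresholds \bar r_t and all parameter values are placeholders (for example, output from the backward-induction sketch above).

```python
# Monte Carlo simulation of renewal sequences and implied drop-out ages.
import numpy as np

L = 20
mu0, sig0 = 1.0, 0.8                     # initial returns: r_0 ~ lognormal (assumed)
delta, lam, phi, sigma = 0.9, 0.05, 0.6, 10.0
r_bar = np.linspace(1.0, 12.0, L)        # placeholder thresholds r_bar_1..r_bar_L
S = 10_000
rng = np.random.default_rng(2)

drop_age = np.full(S, L + 1)             # L+1 codes "kept to full term"
for s in range(S):
    r = rng.lognormal(mu0, sig0)         # step 0: draw r_0^s
    for t in range(1, L + 1):
        if rng.random() < np.exp(-lam * r):      # step (a): tau_t^s = 0, value dies
            r = 0.0
        else:                                     # steps (b)-(c): complementary innovation
            xi = rng.exponential(scale=phi ** t * sigma)
            r = max(delta * r, xi)
        if r <= r_bar[t - 1]:                     # step (d): lapse once r <= r_bar_t
            drop_age[s] = t
            break

ages, counts = np.unique(drop_age, return_counts=True)
print("simulated drop-out frequencies by age:",
      dict(zip(ages.tolist(), np.round(counts / S, 3).tolist())))
```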

30. Estimation Method (continued)
After collecting the simulated sequences of actions, we can evaluate the simulated choice probability at period t:
P^S(t | \theta, Z_m) = \frac{1}{S} \sum_s \mathbf{1}(a_1^s = 1, a_2^s = 1, \ldots, a_{t-1}^s = 1, a_t^s = 0)
Numerical problem: P^S(t | \theta, Z_m) is not a smooth function of the parameters \theta, and it is equal to zero for some t unless S \to \infty.
Smooth alternative approximation:
\hat P^S(t, \theta, Z_m) = \frac{\exp\big( P^S(t | \theta, Z_m)/\eta \big)}{1 + \sum_{t'} \exp\big( P^S(t' | \theta, Z_m)/\eta \big)}
Note: all the structural parameters are identified in this model (except \beta). The implicit normalization is that the coefficient on the renewal cost c_t is one: all parameters are expressed in dollars.


33. Summary of the Results
Main differences across countries: (i) patent regulation rules, (ii) the initial distribution of patent returns.
- Germany has a more selective screening system for granting new patents: higher mean and smaller variance of initial returns r_{i0}.
- Learning about complementary innovations: \phi \approx 0.5, implying very fast learning/growth in returns.
- This has important policy implications: the regulator wants to keep initial renewal costs low, and increase them quickly to extract rents from high-value patents (low distortions after learning is over).

34. The distribution of realized patent values is highly skewed
Implied rates of return on R&D: France = 15.56%, UK = %, Germany = 13.83%.

35. Sequential estimators of DDC models
Key references: Hotz and Miller (1993); Hotz, Miller, Sanders, and Smith (1994); Aguirregabiria and Mira (2002). Identification: Magnac and Thesmar (2002), Kasahara and Shimotsu (2009).
Consider the following dynamic discrete choice model with additively separable (AS) and conditionally independent (CI) errors:
- A discrete actions, a \in \{1, \ldots, A\}.
- Payoff function: u(x | a).
- State space: (x, \epsilon), where x is a discrete state vector and \epsilon is an A-dimensional continuous vector.
- Distribution functions: \Pr(x_{t+1} = x' | x_t, a) = f(x' | x, a); g(\epsilon) is a type-1 EV density with unit variance.

36. Bellman Operator
Bellman equation:
V(x) = \int \max_{a \in A} \Big\{ u(x | a) + \epsilon(a) + \beta \sum_{x'} V(x') f(x' | x, a) \Big\} g(\epsilon) d\epsilon
     = \int \max_{a \in A} \big\{ v(x | a) + \epsilon(a) \big\} g(\epsilon) d\epsilon
     = \ln\Big( \sum_a \exp(v(x | a)) \Big) + \gamma
     = \Gamma(V(x))

37. CCP Operator
Express V(x) as a function of P(a | x):
V(x) = \sum_a P(a | x) \Big\{ u(x | a) + E(\epsilon(a) | x, a) + \beta \sum_{x'} V(x') f(x' | x, a) \Big\}
where
E(\epsilon(a) | x, a) = \frac{1}{P(a | x)} \int \epsilon(a) \, \mathbf{1}\big( v(x | a) + \epsilon(a) > v(x | a') + \epsilon(a'), \; \forall a' \neq a \big) g(\epsilon) d\epsilon \equiv e(a, P(a | x)) = \gamma - \ln P(a | x)

38. CCP Operator (continued)
In matrix form:
V = \sum_a P(a) \circ \big[ u(a) + e(a, P) + \beta F(a) V \big]
\Big[ I - \beta \sum_a P(a) \circ F(a) \Big] V = \sum_a P(a) \circ \big[ u(a) + e(a, P) \big]
V(P) = \Big[ I - \beta \sum_a P(a) \circ F(a) \Big]^{-1} \Big[ \sum_a P(a) \circ \big( u(a) + e(a, P) \big) \Big]
where F(a) is |X| \times |X| and V is |X| \times 1.
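Given CCPs, transition matrices, and flow payoffs, the matrix expression for V(P) is a single linear solve. A small sketch with the bus-replacement objects used earlier (all inputs, including the CCPs, are assumed placeholders):

```python
# Evaluating the Emax function from CCPs alone: V(P) = [I - beta*sum_a P(a)oF(a)]^{-1} sum_a P(a)o(u(a)+e(a,P)).
import numpy as np

M, beta, gamma = 90, 0.95, 0.5772156649
x = np.arange(M)
u = np.column_stack([2.0 - 0.15 * x, 2.0 - (8.0 + 0.05 * x)])   # u(x,a), a in {0,1}
F = np.zeros((2, M, M))
F[0, x, np.minimum(x + 1, M - 1)] = 1.0
F[1, :, 1] = 1.0
P = np.column_stack([1 - 1 / (1 + np.exp(4 - 0.08 * x)),
                     1 / (1 + np.exp(4 - 0.08 * x))])            # placeholder CCPs

e = gamma - np.log(P)                                  # e(a,P) = gamma - ln P(a|x) (T1EV)
A = np.eye(M) - beta * sum(P[:, a][:, None] * F[a] for a in range(2))
b = sum(P[:, a] * (u[:, a] + e[:, a]) for a in range(2))
V_of_P = np.linalg.solve(A, b)                         # one linear solve
print("V(P) at x = 0, 45, 89:", np.round(V_of_P[[0, 45, 89]], 2))
```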

39. CCP Operator (continued)
The CCP contraction mapping is:
P(a | x) = \Pr\big( v(x | a, P) + \epsilon(a) > v(x | a', P) + \epsilon(a'), \; \forall a' \neq a \big)
         = \frac{\exp(\tilde v(x | a, P))}{1 + \sum_{a' > 1} \exp(\tilde v(x | a', P))}
         = \Psi(a | x, P)
where \tilde v(x | a, P) = v(x | a, P) - v(x | 1, P).

40. Two Special Cases
1. Linear payoff: if u(x | a, \theta) = x(a)\theta, the value function is also linear in \theta:
V(P) = Z(P)\theta + \lambda(P)
where
Z(P) = \Big[ I - \beta \sum_a P(a) \circ F(a) \Big]^{-1} \Big[ \sum_a P(a) \circ X(a) \Big]
\lambda(P) = \Big[ I - \beta \sum_a P(a) \circ F(a) \Big]^{-1} \Big[ \sum_a P(a) \circ e(a, P) \Big]

41. Two Special Cases (continued)
2. Absorbing state: v(x | 0) = 0 (e.g. exit or retirement). This changes the value function:
V(x, \epsilon) = \max\Big\{ u(x) + \epsilon(1) + \beta \sum_{x'} \underbrace{E_{\epsilon'}[V(x', \epsilon')]}_{= V(x')} F(x' | x), \;\; \epsilon(0) \Big\}
As before, the expected continuation value is:
V(x) = \log\Big( \exp(0) + \exp\Big( u(x) + \beta \sum_{x'} V(x') F(x' | x) \Big) \Big) + \gamma = \log(1 + \exp(v(x))) + \gamma

42. Two Special Cases (continued)
The choice probability is given by:
\Pr(a = 1 | x) = P(x) = \frac{\exp(v(x))}{1 + \exp(v(x))}
Note that the log odds ratio is equal to the choice-specific value function:
\log\Big( \frac{P(x)}{1 - P(x)} \Big) = v(x)
Therefore, the expected continuation value can be expressed as a function of P(x):
V^P(x) = \log(1 + \exp(v(x))) + \gamma = \log\Big( 1 + \frac{P(x)}{1 - P(x)} \Big) + \gamma = -\log(1 - P(x)) + \gamma
Implication: with an absorbing state, we do not need to invert [I - \beta \sum_a P(a) \circ F(a)] to apply the CCP mapping.
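A quick numerical check of this shortcut in a toy exit model: the Emax obtained by value-function iteration coincides with -log(1 - P(x)) + \gamma evaluated at the implied CCP. Primitives below are arbitrary illustrative values.

```python
# Absorbing-state shortcut: V(x) equals -log(1 - P(x)) + gamma, no matrix inversion needed.
import numpy as np

M, beta, gamma = 30, 0.9, 0.5772156649
x = np.arange(M)
u_stay = 1.0 - 0.08 * x                       # flow payoff of staying, u(x)
F_stay = np.zeros((M, M))
F_stay[x, np.minimum(x + 1, M - 1)] = 1.0     # state drifts up while active

V = np.zeros(M)
for _ in range(2_000):
    v_stay = u_stay + beta * F_stay @ V       # choice-specific value of staying
    V_new = np.logaddexp(0.0, v_stay) + gamma # exit option is worth 0 + eps(0)
    if np.max(np.abs(V_new - V)) < 1e-12:
        break
    V = V_new

P_stay = 1.0 / (1.0 + np.exp(-v_stay))
print(np.allclose(V_new, -np.log(1.0 - P_stay) + gamma))   # True
```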

43. Two-Step Estimator
The objective is to estimate the structural parameters \theta without repeatedly solving the DP problem.
Initial step: reduced form of the model
- Markov transition process: \hat f(x' | x, a)
- Policy function: \hat P(a | x)
- Constraint: both functions must be estimated at EVERY state point x.

44. Two-Step Estimator (continued)
How? Ideally \hat P(a | x) is estimated non-parametrically, to avoid imposing a particular functional form on the policy function (i.e. no theory is involved at this stage). This corresponds to a frequency estimator:
\hat P(a | x) = \frac{1}{n(x)} \sum_{i \in n(x)} \mathbf{1}(a_i = a)
In finite samples, we need to smooth the policy function and interpolate between states that are not visited (or are visited infrequently). Kernels or local-polynomial techniques can be used.
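As an illustration of this first stage, the sketch below computes the raw frequency estimator and a Gaussian-kernel (Nadaraya-Watson) smoother on a simulated panel; the data-generating CCP and the bandwidth are arbitrary choices.

```python
# First-stage CCP estimation: frequency estimator plus a kernel smoother.
import numpy as np

rng = np.random.default_rng(3)
M = 90
x_i = rng.integers(0, M, size=5_000)
true_p = 1 / (1 + np.exp(4 - 0.08 * np.arange(M)))      # assumed data-generating CCP
a_i = (rng.random(x_i.size) < true_p[x_i]).astype(int)

# Raw frequency estimator (noisy or undefined in rarely visited states)
counts = np.bincount(x_i, minlength=M)
freq = np.bincount(x_i, weights=a_i, minlength=M) / np.maximum(counts, 1)

# Gaussian-kernel (Nadaraya-Watson) smoother over the state
h = 3.0
ccp_hat = np.empty(M)
for x0 in range(M):
    w = np.exp(-0.5 * ((x_i - x0) / h) ** 2)
    ccp_hat[x0] = np.sum(w * a_i) / np.sum(w)

print("x=30: raw =", round(freq[30], 3), " smoothed =", round(ccp_hat[30], 3),
      " truth =", round(true_p[30], 3))
```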

45. Two-Step Estimator (continued)
Second step: estimate the structural parameters conditional on (\hat P, \hat f).

46. Example: Linear payoff function, u(x | a, \theta) = x(a)\theta
1. Data preparation: use (\hat P, \hat F) to calculate
Z(\hat P, \hat F) = \Big[ I - \beta \sum_a \hat P(a) \circ \hat F(a) \Big]^{-1} \Big[ \sum_a \hat P(a) \circ X(a) \Big]
\lambda(\hat P, \hat F) = \Big[ I - \beta \sum_a \hat P(a) \circ \hat F(a) \Big]^{-1} \Big[ \sum_a \hat P(a) \circ e(a, \hat P) \Big]
2. GMM: let W_{it} denote a vector of predetermined instruments (e.g. state variables and their interactions). We can construct the moment conditions:
E\Big( W_{it} \big[ a_{it} - \Psi(a_{it} | x_{it}, \hat P, \hat F) \big] \Big) = 0
where
\Psi(a_{it} | x_{it}, \hat P, \hat F) = \frac{\exp\big( v(x_{it} | a_{it}, \hat P, \hat F) \big)}{\sum_{a'} \exp\big( v(x_{it} | a', \hat P, \hat F) \big)}
v(x | a, \hat P, \hat F) = x(a)\theta + \beta \sum_{x'} \underbrace{V(x' | \hat P, \hat F)}_{= Z(x' | \hat P, \hat F)\theta + \lambda(x' | \hat P, \hat F)} \hat f(x' | x, a)
                         = \Big( x(a) + \beta \sum_{x'} Z(x' | \hat P, \hat F) \hat f(x' | x, a) \Big)\theta + \beta \sum_{x'} \lambda(x' | \hat P, \hat F) \hat f(x' | x, a)
Therefore, the second stage of the problem is equivalent to a linear GMM problem. (Note: this also highlights the difficulty of identifying \beta separately from \theta.)
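A compact end-to-end sketch of the two-step estimator with a linear payoff: simulate a panel, estimate the CCPs by frequencies, build Z(\hat P) and \lambda(\hat P) once, and maximize a logit pseudo-likelihood in \theta (a PML implementation of the same idea; the GMM version would instead interact the residuals a_it - \Psi(\cdot) with instruments W_it). All primitives and the two-parameter payoff specification are assumptions.

```python
# Two-step estimation with a linear payoff u(x,a) = x(a)'theta, theta = (theta_Y1, theta_R0).
import numpy as np
from scipy.optimize import minimize

M, beta, gamma = 90, 0.95, 0.5772156649
xg = np.arange(M)
F = np.zeros((2, M, M))
F[0, xg, np.minimum(xg + 1, M - 1)] = 1.0
F[1, :, 1] = 1.0
X = np.stack([np.column_stack([xg, np.zeros(M)]),              # x(a=0) = [x, 0]
              np.column_stack([np.zeros(M), -np.ones(M)])])    # x(a=1) = [0, -1]

# Generate data from the "true" model (illustration only)
theta_true = np.array([-0.15, 8.0])
V = np.zeros(M)
for _ in range(2_000):
    v = np.column_stack([X[a] @ theta_true + beta * F[a] @ V for a in range(2)])
    V_new = np.logaddexp(v[:, 0], v[:, 1]) + gamma
    if np.max(np.abs(V_new - V)) < 1e-12:
        break
    V = V_new
P_true = 1 / (1 + np.exp(v[:, 0] - v[:, 1]))
rng = np.random.default_rng(4)
x_it = rng.integers(0, M, size=30_000)
a_it = (rng.random(x_it.size) < P_true[x_it]).astype(int)

# Step 1: first-stage CCPs (frequency estimator, clipped away from 0 and 1)
cnt = np.bincount(x_it, minlength=M)
P1 = np.clip(np.bincount(x_it, weights=a_it, minlength=M) / np.maximum(cnt, 1),
             1e-3, 1 - 1e-3)
P = np.column_stack([1 - P1, P1])

# Step 2: data preparation Z(P), lambda(P), then pseudo-likelihood in theta
e = gamma - np.log(P)
Ainv = np.linalg.inv(np.eye(M) - beta * sum(P[:, a][:, None] * F[a] for a in range(2)))
Z = Ainv @ sum(P[:, a][:, None] * X[a] for a in range(2))          # M x 2
lam = Ainv @ sum(P[:, a] * e[:, a] for a in range(2))              # M x 1
W = np.stack([X[a] + beta * F[a] @ Z for a in range(2)])           # regressors
off = np.stack([beta * F[a] @ lam for a in range(2)])              # offsets

def neg_pseudo_ll(theta):
    v = np.column_stack([W[a] @ theta + off[a] for a in range(2)])
    ll = v[np.arange(x_it.size) * 0 + x_it, a_it] - np.logaddexp(v[x_it, 0], v[x_it, 1])
    return -ll.sum()

res = minimize(neg_pseudo_ll, x0=np.array([-0.05, 4.0]), method="BFGS")
print("two-step estimate of (theta_Y1, theta_R0):", np.round(res.x, 3))
```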

47. Pseudo-likelihood estimators (PML)
Source: Aguirregabiria and Mira (2002).
Data: panel of n individuals over T periods: (A, X) = \{a_{it}, x_{it}\}_{i=1,\ldots,n; \; t=1,\ldots,T}.
2-step estimator:
1. Obtain a flexible estimator of the CCPs, \hat P_1(a | x).
2. Feasible PML estimator:
Q_{2S}(A, X) = \max_\theta \sum_i \sum_t \ln \Psi(a_{it} | x_{it}, \hat P_1, \hat F, \theta)
If V(P) is linear, the second step is a linear probit/logit model.

48. Pseudo-likelihood estimators (PML)
NPL estimator: the NPL repeats the PML and policy-function iteration steps sequentially (i.e. swapping the fixed-point algorithm):
1. Obtain a flexible estimator of the CCPs, \hat P_1(a | x).
2. Feasible PML step:
Q_{k+1}(A, X) = \max_\theta \sum_i \sum_t \ln \Psi(a_{it} | x_{it}, \hat P_k, \hat F, \theta)
3. Policy-function iteration step:
\hat P_{k+1}(a | x) = \Psi(a | x, \hat P_k, \hat F, \hat\theta_{k+1})
4. Stop if ||\hat P_{k+1} - \hat P_k|| < \eta; otherwise repeat steps 2 and 3.
In the single-agent case, the NPL is guaranteed to converge to the MLE (i.e. NFXP) estimator. In practice, Aguirregabiria and Mira (2002) show that 2 or 3 steps are sufficient to eliminate the small-sample bias of the 2-step estimator, and the procedure is computationally easier to implement than the NFXP.

49. Simulation-based CCP estimator
Source: Hotz, Miller, Sanders, and Smith (1994).
Starting point: the Hotz-Miller GMM estimator suffers from a curse of dimensionality in X, since we must invert an |X| \times |X| matrix to evaluate the continuation value (not true for optimal-stopping models). This is less severe for NFXP estimators, since we can use the value-function mapping to solve for the policy functions.
Solution:
- First insight: we only need to know the relative choice-specific value functions \tilde v(a | x) = v(a | x) - v(1 | x) to predict behavior:
a_{it} = \begin{cases} 1 & \text{if } \tilde v(a | x) + \epsilon(a) < 0 \text{ for all } a \neq 1 \\ a & \text{if } \max\{0, \tilde v(a' | x) + \epsilon(a')\} < \tilde v(a | x) + \epsilon(a) \text{ for all } a' \neq a \end{cases}
- Second insight: there exists a one-to-one mapping between \tilde v(a | x) and P(a | x).

50. Simulation-based CCP estimator (continued)
Logit example:
P(a | x) = \frac{\exp(v(a | x))}{\sum_{a'} \exp(v(a' | x))} = \frac{\exp(\tilde v(a | x))}{1 + \sum_{a' > 1} \exp(\tilde v(a' | x))} \;\;\Rightarrow\;\; \tilde v(a | x, P) = \ln P(a | x) - \ln P(1 | x)
Third insight: we can approximate the model's predicted value function at any state x by simulating actions according to a policy function P(a | x):
\hat V^S(x | P) = \frac{1}{S} \sum_s \sum_{\tau=0}^{T} \beta^{\tau} \big\{ u(x^s_{t+\tau}, a^s_{t+\tau}) + e(a^s_{t+\tau} | P(a^s_{t+\tau} | x^s_{t+\tau})) \big\}
where (x^s, a^s) is a simulated sequence of choices and states sampled from P(a | x) and f(x' | x, a), and e(a | P(a | x)) = E(\epsilon(a) | a_i = a, x, P) has a closed-form expression. Importantly, \lim_{S \to \infty} \hat V^S(x | P) = V(x | P).

51. Estimation Procedure
Step 1: estimate \hat P(a | x) and \hat f(x' | x, a), and compute the "dependent variable":
\tilde v_n(a_{it} | x_{it}, \hat P) = \ln \hat P(a_{it} | x_{it}) - \ln \hat P(1 | x_{it})
Step 2a: simulate the value function at each observed state and choice (x_{it}, a_{it}). Each simulated sequence calculates the value of future choices:
1. Calculate the static value of (x_{it}, a_{it}): u(x_{it}, a_{it} | \theta) + e(a_{it} | \hat P, x_{it})
2. Sample a new state for period t+1: x_{it+1} \sim \hat f(x' | x_{it}, a_{it})
3. Sample a new choice for period t+1: a_{it+1} \sim \hat P(a | x_{it+1})
Repeat steps 1-3 for T periods. This gives the net present value of one simulated sequence:
v^s(a_{it} | x_{it}, \hat P, \theta) = u(x_{it}, a_{it} | \theta) + e(a_{it} | \hat P, x_{it}) + \sum_{\tau=1}^{T} \beta^{\tau} \big[ u(x^s_{it+\tau}, a^s_{it+\tau} | \theta) + e(a^s_{it+\tau} | \hat P, x^s_{it+\tau}) \big]

52. Estimation Procedure (continued)
Repeat this process S times. This gives the simulated value of choosing a_{it} in state x_{it}:
v^S(a_{it} | x_{it}, \hat P, \theta) = \frac{1}{S} \sum_s v^s(a_{it} | x_{it}, \hat P, \theta)
Let \tilde v^S(a_{it} | x_{it}, \hat P, \theta) = v^S(a_{it} | x_{it}, \hat P, \theta) - v^S(1 | x_{it}, \hat P, \theta).
Note: if u(x, a | \theta) is linear in \theta, we only need to carry out this simulation process once.
Step 2b: moment conditions
E\Big( W_{it} \big[ \tilde v_n(a_{it} | x_{it}, \hat P) - \tilde v^S(a_{it} | x_{it}, \hat P, \theta) \big] \Big) = 0
where W_{it} is a vector of instruments.
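A sketch of the forward-simulation step for the binary bus model, using a = 0 (keep) as the base alternative: for each state, simulate the discounted sums of payoff covariates and of the e(a | \hat P) terms once with each initial action, then recover \theta by least squares from the log odds. The first-stage CCPs here are placeholders rather than estimates, so the fitted \theta simply rationalizes them; with CCPs estimated from data this becomes the HMSS estimator.

```python
# Forward-simulation (HMSS-style) sketch with a linear payoff u(x,a) = x(a)'theta.
import numpy as np

M, beta, gamma, T, S = 90, 0.95, 0.5772156649, 100, 100
xg = np.arange(M)
X = np.stack([np.column_stack([xg, np.zeros(M)]),               # x(a=0) = [x, 0]
              np.column_stack([np.zeros(M), -np.ones(M)])])     # x(a=1) = [0, -1]
P1 = np.clip(1 / (1 + np.exp(5 - 0.07 * xg)), 1e-4, 1 - 1e-4)   # placeholder CCP of a=1
e = gamma - np.log(np.column_stack([1 - P1, P1]))               # e(a | P)
next_x = {0: np.minimum(xg + 1, M - 1), 1: np.ones(M, dtype=int)}  # deterministic f
rng = np.random.default_rng(5)

def simulate(x0, a0):
    """Discounted sums of covariates and e-terms starting at (x0, a0), then following P1."""
    dx, de = np.zeros(2), 0.0
    x, a = x0, a0
    for tau in range(T):
        dx += beta ** tau * X[a][x]
        de += beta ** tau * e[x, a]
        x = next_x[a][x]
        a = int(rng.random() < P1[x])
    return dx, de

DX = np.zeros((M, 2, 2)); DE = np.zeros((M, 2))
for x0 in range(M):
    for a0 in range(2):
        for s in range(S):
            dx, de = simulate(x0, a0)
            DX[x0, a0] += dx / S
            DE[x0, a0] += de / S

# Match simulated value differences to the log odds, state by state.
y = np.log(P1) - np.log(1 - P1) - (DE[:, 1] - DE[:, 0])   # log odds minus known offset
R = DX[:, 1, :] - DX[:, 0, :]                              # coefficients on theta
theta_hat, *_ = np.linalg.lstsq(R, y, rcond=None)
print("fitted (theta_Y1, theta_R0):", np.round(theta_hat, 3))
```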

53. Estimation Procedure (continued)
Importantly, setting up the moment conditions this way implies that the estimator is consistent even with a finite number of simulation draws S. Why? The simulation error, \tilde v(a_{it} | x_{it}, \hat P, \theta) - \tilde v^S(a_{it} | x_{it}, \hat P, \theta), enters additively, and therefore vanishes as n \to \infty (rather than requiring S \to \infty).
However, the sampling error in \hat P enters the moment conditions non-linearly, through \ln \hat P(a_{it} | x_{it}) - \ln \hat P(1 | x_{it}), and can induce severe biases (the same issue as before). For instance, if \hat P(a_{it} | x_{it}) = 0, the objective function is not defined.
HMSS present Monte Carlo experiments to illustrate this small-sample bias; it can be quite large.

References
Ackerberg, D. (2003). Advertising, learning, and consumer choice in experience good markets: A structural empirical examination. International Economic Review 44.
Adda, J. and R. Cooper (2000). Balladurette and Juppette: A discrete analysis of scrapping subsidies. Journal of Political Economy 108(4).
Aguirregabiria, V. and P. Mira (2002). Swapping the nested fixed point algorithm: A class of estimators for discrete Markov decision models. Econometrica 70(4).
Crawford, G. and M. Shum (2005). Uncertainty and learning in pharmaceutical demand. Econometrica 73.
Das, S. (1992). A microeconometric model of capital utilization and retirement: The case of the US cement industry. Review of Economic Studies 59.
Das, S., M. Roberts, and J. Tybout (2007). Market entry costs, producer heterogeneity, and export dynamics. Econometrica.
Erdem, T., S. Imai, and M. P. Keane (2003). Brand and quantity choice dynamics under price uncertainty. Quantitative Marketing and Economics 1.
Erdem, T. and M. P. Keane (1996). Decision-making under uncertainty: Capturing dynamic brand choice processes in turbulent consumer goods markets. Marketing Science 15(1).
Gordon, B. (2010). A dynamic model of consumer replacement cycles in the PC processor industry. Marketing Science 28(5).

Gowrisankaran, G. and M. Rysman (2012). Dynamics of consumer demand for new durable goods. Journal of Political Economy 120.
Heckman, J. J. (1981). The incidental parameters problem and the problem of initial conditions in estimating a discrete time-discrete data stochastic process. In C. F. Manski and D. McFadden (Eds.), Structural Analysis of Discrete Data with Econometric Applications. MIT Press.
Hendel, I. and A. Nevo (2006). Measuring the implications of sales and consumer stockpiling behavior. Econometrica 74(6).
Hotz, V. J. and R. A. Miller (1993). Conditional choice probabilities and the estimation of dynamic models. The Review of Economic Studies 60(3).
Hotz, V. J., R. A. Miller, S. Sanders, and J. Smith (1994). A simulation estimator for dynamic models of discrete choice. The Review of Economic Studies 61(2).
Kasahara, H. and K. Shimotsu (2009). Nonparametric identification of finite mixture models of dynamic discrete choices. Econometrica 77(1).
Lee, R. (2013). Vertical integration and exclusivity in platform and two-sided markets. American Economic Review 103(7).
Magnac, T. and D. Thesmar (2002). Identifying dynamic discrete decision processes. Econometrica 70.
Pakes, A. (1986). Patents as options: Some estimates of the value of holding European patent stocks. Econometrica 54(4).

Rust, J. (1987). Optimal replacement of GMC bus engines: An empirical model of Harold Zurcher. Econometrica 55(5).
Rust, J. and G. Rothwell (1995). Optimal response to a shift in regulatory regime: The case of the US nuclear power industry. Journal of Applied Econometrics 10 (Special Issue: The Microeconometrics of Dynamic Decision Making), S75-S118.


More information

A Simple and Robust Estimator for Discount Factors in Optimal Stopping Dynamic Discrete Choice Models

A Simple and Robust Estimator for Discount Factors in Optimal Stopping Dynamic Discrete Choice Models A Simple and Robust Estimator for Discount Factors in Optimal Stopping Dynamic Discrete Choice Models Øystein Daljord, Denis Nekipelov & Minjung Park April 10, 2018 Abstract We propose a simple and robust

More information

Equity correlations implied by index options: estimation and model uncertainty analysis

Equity correlations implied by index options: estimation and model uncertainty analysis 1/18 : estimation and model analysis, EDHEC Business School (joint work with Rama COT) Modeling and managing financial risks Paris, 10 13 January 2011 2/18 Outline 1 2 of multi-asset models Solution to

More information

Course information FN3142 Quantitative finance

Course information FN3142 Quantitative finance Course information 015 16 FN314 Quantitative finance This course is aimed at students interested in obtaining a thorough grounding in market finance and related empirical methods. Prerequisite If taken

More information

Characterization of the Optimum

Characterization of the Optimum ECO 317 Economics of Uncertainty Fall Term 2009 Notes for lectures 5. Portfolio Allocation with One Riskless, One Risky Asset Characterization of the Optimum Consider a risk-averse, expected-utility-maximizing

More information

Implementing an Agent-Based General Equilibrium Model

Implementing an Agent-Based General Equilibrium Model Implementing an Agent-Based General Equilibrium Model 1 2 3 Pure Exchange General Equilibrium We shall take N dividend processes δ n (t) as exogenous with a distribution which is known to all agents There

More information

STATE UNIVERSITY OF NEW YORK AT ALBANY Department of Economics. Ph. D. Preliminary Examination: Macroeconomics Fall, 2009

STATE UNIVERSITY OF NEW YORK AT ALBANY Department of Economics. Ph. D. Preliminary Examination: Macroeconomics Fall, 2009 STATE UNIVERSITY OF NEW YORK AT ALBANY Department of Economics Ph. D. Preliminary Examination: Macroeconomics Fall, 2009 Instructions: Read the questions carefully and make sure to show your work. You

More information

Opening Secondary Markets: A Durable Goods Oligopoly with Transaction Costs

Opening Secondary Markets: A Durable Goods Oligopoly with Transaction Costs Opening Secondary Markets: A Durable Goods Oligopoly with Transaction Costs Jiawei Chen Department of Economics UC-Irvine Susanna Esteban Department of Economics Universidad Carlos III de Madrid Matthew

More information

Bivariate Birnbaum-Saunders Distribution

Bivariate Birnbaum-Saunders Distribution Department of Mathematics & Statistics Indian Institute of Technology Kanpur January 2nd. 2013 Outline 1 Collaborators 2 3 Birnbaum-Saunders Distribution: Introduction & Properties 4 5 Outline 1 Collaborators

More information

Lecture 10: Point Estimation

Lecture 10: Point Estimation Lecture 10: Point Estimation MSU-STT-351-Sum-17B (P. Vellaisamy: MSU-STT-351-Sum-17B) Probability & Statistics for Engineers 1 / 31 Basic Concepts of Point Estimation A point estimate of a parameter θ,

More information

Labor Migration and Wage Growth in Malaysia

Labor Migration and Wage Growth in Malaysia Labor Migration and Wage Growth in Malaysia Rebecca Lessem October 4, 2011 Abstract I estimate a discrete choice dynamic programming model to calculate how wage differentials affected internal migration

More information

AMH4 - ADVANCED OPTION PRICING. Contents

AMH4 - ADVANCED OPTION PRICING. Contents AMH4 - ADVANCED OPTION PRICING ANDREW TULLOCH Contents 1. Theory of Option Pricing 2 2. Black-Scholes PDE Method 4 3. Martingale method 4 4. Monte Carlo methods 5 4.1. Method of antithetic variances 5

More information

Chapter 5 Univariate time-series analysis. () Chapter 5 Univariate time-series analysis 1 / 59

Chapter 5 Univariate time-series analysis. () Chapter 5 Univariate time-series analysis 1 / 59 Chapter 5 Univariate time-series analysis () Chapter 5 Univariate time-series analysis 1 / 59 Time-Series Time-series is a sequence fx 1, x 2,..., x T g or fx t g, t = 1,..., T, where t is an index denoting

More information

Reinforcement Learning (1): Discrete MDP, Value Iteration, Policy Iteration

Reinforcement Learning (1): Discrete MDP, Value Iteration, Policy Iteration Reinforcement Learning (1): Discrete MDP, Value Iteration, Policy Iteration Piyush Rai CS5350/6350: Machine Learning November 29, 2011 Reinforcement Learning Supervised Learning: Uses explicit supervision

More information

Part A: Questions on ECN 200D (Rendahl)

Part A: Questions on ECN 200D (Rendahl) University of California, Davis Date: September 1, 2011 Department of Economics Time: 5 hours Macroeconomics Reading Time: 20 minutes PRELIMINARY EXAMINATION FOR THE Ph.D. DEGREE Directions: Answer all

More information

EE266 Homework 5 Solutions

EE266 Homework 5 Solutions EE, Spring 15-1 Professor S. Lall EE Homework 5 Solutions 1. A refined inventory model. In this problem we consider an inventory model that is more refined than the one you ve seen in the lectures. The

More information

Lecture 2: Making Good Sequences of Decisions Given a Model of World. CS234: RL Emma Brunskill Winter 2018

Lecture 2: Making Good Sequences of Decisions Given a Model of World. CS234: RL Emma Brunskill Winter 2018 Lecture 2: Making Good Sequences of Decisions Given a Model of World CS234: RL Emma Brunskill Winter 218 Human in the loop exoskeleton work from Steve Collins lab Class Structure Last Time: Introduction

More information

Consumption and Portfolio Decisions When Expected Returns A

Consumption and Portfolio Decisions When Expected Returns A Consumption and Portfolio Decisions When Expected Returns Are Time Varying September 10, 2007 Introduction In the recent literature of empirical asset pricing there has been considerable evidence of time-varying

More information

All Investors are Risk-averse Expected Utility Maximizers

All Investors are Risk-averse Expected Utility Maximizers All Investors are Risk-averse Expected Utility Maximizers Carole Bernard (UW), Jit Seng Chen (GGY) and Steven Vanduffel (Vrije Universiteit Brussel) AFFI, Lyon, May 2013. Carole Bernard All Investors are

More information

Choice Probabilities. Logit Choice Probabilities Derivation. Choice Probabilities. Basic Econometrics in Transportation.

Choice Probabilities. Logit Choice Probabilities Derivation. Choice Probabilities. Basic Econometrics in Transportation. 1/31 Choice Probabilities Basic Econometrics in Transportation Logit Models Amir Samimi Civil Engineering Department Sharif University of Technology Primary Source: Discrete Choice Methods with Simulation

More information

Implementing Models in Quantitative Finance: Methods and Cases

Implementing Models in Quantitative Finance: Methods and Cases Gianluca Fusai Andrea Roncoroni Implementing Models in Quantitative Finance: Methods and Cases vl Springer Contents Introduction xv Parti Methods 1 Static Monte Carlo 3 1.1 Motivation and Issues 3 1.1.1

More information