Parameterized Expectations
A Brief Introduction

Craig Burnside
Duke University

November 2006
Parameterized Expectations

- Due to den Haan and Marcet (1990), and described in more detail by Marcet and Marshall (1994).
- Approximate the conditional expectations in Euler equations by parameterized functions of the state variables.
- Justified by the fact that the true conditional expectation, given information at date $t$, is a function of the information available at time $t$.
- The approximating functions are chosen to best fit the Euler equations.
The Asset Pricing Model

- We want a $v$ such that
  $$v(x) = \beta \int \exp(\alpha y)\,[v(y) + 1]\,f(y \mid x)\,dy. \qquad (1)$$
- The integral on the right-hand side of this equation is a function of $x$.
- Approximate the integral by a function $\psi(x, \kappa)$, where $\kappa$ is a vector of parameters that determines the specific function $\psi$ within a set of functions; e.g., work with second-order polynomials, and let $\kappa$ be the vector of parameters defining the polynomial.
- In the asset pricing example, parameterizing the integral is equivalent to parameterizing the solution function, but this will not always be true.
Choosing the Approximating Function

- In their applications, den Haan and Marcet (1990) suggest using
  $$\psi(x, \kappa) = \exp(\kappa_0 + \kappa_1 x + \kappa_2 x^2 + \cdots + \kappa_N x^N)$$
  for some finite $N$.
- Problem: if $x$ is normally distributed, $E[\psi(x, \kappa)]$ does not exist for $N \geq 3$, and a restriction needs to be placed on the size of $\kappa_2$ for $N = 2$.
- It is therefore probably better to use simple polynomials,
  $$\psi(x, \kappa) = \kappa_0 + \kappa_1 x + \kappa_2 x^2 + \cdots + \kappa_N x^N,$$
  or linear combinations of the orthogonal polynomials corresponding to the normal distribution,
  $$\psi(x, \kappa) = \kappa_0 \psi_0(x) + \kappa_1 \psi_1(x) + \kappa_2 \psi_2(x) + \cdots + \kappa_N \psi_N(x),$$
  as in the sketch below.
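The slides contain no code, so the following is a minimal sketch of the three candidate families; the function names are illustrative. For the normal distribution the relevant orthogonal polynomials are the probabilists' Hermite polynomials, which numpy evaluates via `hermeval`.

```python
import numpy as np
from numpy.polynomial.polynomial import polyval
from numpy.polynomial.hermite_e import hermeval  # probabilists' Hermite polynomials

def psi_exp_poly(x, kappa):
    """den Haan-Marcet form: exp of a polynomial in x (moments can fail to exist)."""
    return np.exp(polyval(x, kappa))

def psi_poly(x, kappa):
    """Simple polynomial: kappa_0 + kappa_1 x + ... + kappa_N x^N."""
    return polyval(x, kappa)

def psi_hermite(x, kappa):
    """Linear combination of Hermite polynomials, orthogonal under N(0, 1)."""
    return hermeval(x, kappa)
```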
Obtaining the Best Value of κ: Setting up the Algorithm

- Simulate a time series of $x_t$ assuming the law of motion
  $$x_t = \mu(1 - \rho) + \rho x_{t-1} + \epsilon_t,$$
  where $|\rho| < 1$ and $\epsilon_t \sim \mathrm{Niid}(0, \sigma^2)$.
- Take an arbitrary value of $\kappa$ and approximate the RHS integral in the Euler equation by $\psi(x_t, \kappa)$; this generates a $V_t$ series,
  $$V_t(\kappa) = \psi(x_t, \kappa).$$
- Define $\phi(x, V) = \beta \exp(\alpha x)(V + 1)$. Corresponding to $\{V_t\}$, generate a sequence $\{\phi_t\}$ with the property that
  $$\phi_t = \phi[x_t, V_t(\kappa)] = \beta \exp(\alpha x_t)[V_t(\kappa) + 1].$$
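A sketch of this setup; the parameter values are placeholders of mine, since the slides specify none:

```python
import numpy as np

def simulate_series(kappa, psi, T=10_000, mu=0.015, rho=-0.14,
                    sigma=0.02, beta=0.95, alpha=-1.5, seed=0):
    """Simulate x_t from the AR(1) law of motion, then build V_t(kappa) and phi_t.
    All parameter values are illustrative placeholders."""
    rng = np.random.default_rng(seed)
    x = np.empty(T)
    x[0] = mu  # start at the unconditional mean
    for t in range(1, T):
        x[t] = mu * (1.0 - rho) + rho * x[t - 1] + rng.normal(0.0, sigma)
    V = psi(x, kappa)                           # V_t(kappa) = psi(x_t, kappa)
    phi = beta * np.exp(alpha * x) * (V + 1.0)  # phi_t = beta exp(alpha x_t)[V_t + 1]
    return x, V, phi
```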
Obtaining the Best Value of κ: Relationship of the Approximation to the True Solution

- Consider the fact that the exact integral on the RHS of (1) is a function $\psi(x)$; i.e.,
  $$\psi(x_t) = E_t\, \phi(x_{t+1}, V_{t+1}).$$
- Since conditional expectations are also minimum mean-squared-error predictors, it is also true (loosely) that
  $$\psi = \arg\min_{\psi \in \Psi} E\left\{ [\phi(x_{t+1}, V_{t+1}) - \psi(x_t)]^2 \mid x_t \right\}.$$
  For a proof of this fact see Hamilton (1994), p. 73.
- So den Haan and Marcet suggest finding a fixed point of the mapping
  $$K(\kappa) = \arg\min_{\tilde\kappa} E\left\{ \phi[x_{t+1}, V_{t+1}(\kappa)] - \psi(x_t, \tilde\kappa) \right\}^2. \qquad (2)$$
- Notice that given any $\kappa$, the operation on the RHS of (2) produces a new value of $\kappa$, i.e. the optimal $\tilde\kappa$ given $\kappa$.
Obtaining the Best Value of κ: Finding the Fixed Point

- Pick an initial value of $\kappa$, denoted $\kappa_0$, and generate the sequences $x_t$, $V_t(\kappa_0) = \psi(x_t, \kappa_0)$ and $\phi[x_{t+1}, V_{t+1}(\kappa_0)]$.
- Find $\kappa_1$ given by
  $$\kappa_1 = \arg\min_{\kappa} E\left\{ \phi[x_{t+1}, V_{t+1}(\kappa_0)] - \psi(x_t, \kappa) \right\}^2.$$
- Keep iterating by defining
  $$\kappa_n = K(\kappa_{n-1}) = \arg\min_{\kappa} E\left\{ \phi[x_{t+1}, V_{t+1}(\kappa_{n-1})] - \psi(x_t, \kappa) \right\}^2.$$
- A refinement:
  $$\kappa_n = (1 - \lambda)\kappa_{n-1} + \lambda K(\kappa_{n-1}) \quad \text{for } 0 < \lambda < 1.$$
- Marcet and Marshall (1994) provide some convergence results. A sketch of the damped iteration appears below.
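A minimal sketch of the damped outer iteration, treating the minimization step as a black-box callable `fit_nlls` (implemented in the sketches after the next slides); since $x_t$ is exogenous here, a single simulated path can be reused across iterations:

```python
import numpy as np

def pea_fixed_point(kappa0, x, psi, fit_nlls, beta=0.95, alpha=-1.5,
                    lam=0.5, max_iter=500, tol=1e-8):
    """Damped iteration kappa_n = (1 - lam) kappa_{n-1} + lam K(kappa_{n-1})."""
    kappa = np.asarray(kappa0, dtype=float)
    for _ in range(max_iter):
        V = psi(x, kappa)                            # V_t(kappa_{n-1})
        phi = beta * np.exp(alpha * x) * (V + 1.0)   # phi[x_t, V_t(kappa_{n-1})]
        # K(kappa_{n-1}): NLLS of phi_{t+1} on psi(x_t, .)
        K_kappa = fit_nlls(x[:-1], phi[1:], psi, kappa)
        kappa_new = (1.0 - lam) * kappa + lam * K_kappa
        if np.max(np.abs(kappa_new - kappa)) < tol:
            return kappa_new
        kappa = kappa_new
    return kappa
```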
The Nonlinear Least Squares Component of the Algorithm

- In most applications the exact MSE is not possible to compute, so long finite sequences of length $T$ are generated and
  $$K(\kappa_{n-1}) = \arg\min_{\kappa} \sum_t \left\{ \phi[x_{t+1}, V_{t+1}(\kappa_{n-1})] - \psi(x_t, \kappa) \right\}^2.$$
- This makes obtaining $K(\kappa_{n-1})$ equivalent to performing nonlinear least squares, where $\phi[x_{t+1}, V_{t+1}(\kappa_{n-1})]$ is the dependent variable and $\psi(x_t, \kappa)$ is the regression function.
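One way to implement this step is to hand the sample sum of squares to a generic solver; the sketch below uses `scipy.optimize.least_squares`, which is an implementation choice of mine rather than anything on the slides (the slides' own regression-based recipe follows on the next two slides).

```python
import numpy as np
from scipy.optimize import least_squares

def fit_nlls(x_lag, phi_lead, psi, kappa_init):
    """Sample analogue of K(kappa_{n-1}): minimize over kappa
    sum_t { phi[x_{t+1}, V_{t+1}(kappa_{n-1})] - psi(x_t, kappa) }^2."""
    def residuals(kappa):
        return phi_lead - psi(x_lag, kappa)
    return least_squares(residuals, np.asarray(kappa_init, dtype=float)).x
```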
Details of the NLLS Algorithm: Part 1

- The FOC for the NLLS problem described above is
  $$\sum_t \left\{ \phi[x_{t+1}, V_{t+1}(\kappa_{n-1})] - \psi(x_t, \kappa) \right\} \frac{\partial \psi(x_t, \kappa)}{\partial \kappa'} = 0.$$
- Obtain the solution via a series of linear regressions. Start from an initial value $\tilde\kappa_0$. Generate the time series
  $$\tilde y_t = \phi[x_{t+1}, V_{t+1}(\kappa_{n-1})] - \psi(x_t, \tilde\kappa_0) + \tilde\kappa_0' \frac{\partial \psi(x_t, \tilde\kappa_0)}{\partial \kappa},$$
  $$\tilde x_t = \frac{\partial \psi(x_t, \tilde\kappa_0)}{\partial \kappa}.$$
- Perform a linear regression of $\tilde y_t$ onto $\tilde x_t$ to obtain the estimate $\tilde\kappa_1$, which will satisfy
  $$\sum_t \tilde y_t \tilde x_t' = \tilde\kappa_1' \sum_t \tilde x_t \tilde x_t'.$$
Details of the NLLS Algorithm: Part 2

- Writing out terms, the linear regression is equivalent to
  $$\sum_t \left\{ \phi[x_{t+1}, V_{t+1}(\kappa_{n-1})] - \psi(x_t, \tilde\kappa_0) \right\} \frac{\partial \psi(x_t, \tilde\kappa_0)}{\partial \kappa'} + \tilde\kappa_0' \sum_t \frac{\partial \psi(x_t, \tilde\kappa_0)}{\partial \kappa} \frac{\partial \psi(x_t, \tilde\kappa_0)}{\partial \kappa'} = \tilde\kappa_1' \sum_t \frac{\partial \psi(x_t, \tilde\kappa_0)}{\partial \kappa} \frac{\partial \psi(x_t, \tilde\kappa_0)}{\partial \kappa'}.$$
- If this algorithm is applied recursively and converges, in the sense that $\tilde\kappa_m \approx \tilde\kappa_{m+1}$, the second term on the LHS cancels with the RHS to yield
  $$\sum_t \left\{ \phi[x_{t+1}, V_{t+1}(\kappa_{n-1})] - \psi(x_t, \tilde\kappa_m) \right\} \frac{\partial \psi(x_t, \tilde\kappa_m)}{\partial \kappa'} = 0.$$
- This is a solution to the first-order condition for the NLLS problem, so $\tilde\kappa_m$ can be used as $K(\kappa_{n-1})$.
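A sketch of this regression-based recipe; differentiating $\psi$ with respect to $\kappa$ by finite differences is my assumption, since the slides leave the gradient computation unspecified:

```python
import numpy as np

def nlls_by_regressions(x_lag, phi_lead, psi, kappa0,
                        max_iter=100, tol=1e-10, h=1e-6):
    """Solve the NLLS FOC by repeated linear regressions (a Gauss-Newton scheme)."""
    kappa = np.asarray(kappa0, dtype=float)
    for _ in range(max_iter):
        # x~_t = d psi(x_t, kappa~_m) / d kappa, one row per observation
        grad = np.empty((x_lag.size, kappa.size))
        for j in range(kappa.size):
            dk = np.zeros_like(kappa)
            dk[j] = h
            grad[:, j] = (psi(x_lag, kappa + dk) - psi(x_lag, kappa - dk)) / (2.0 * h)
        # y~_t = phi_{t+1} - psi(x_t, kappa~_m) + kappa~_m' x~_t
        y = phi_lead - psi(x_lag, kappa) + grad @ kappa
        # regress y~_t on x~_t to obtain kappa~_{m+1}
        kappa_new, *_ = np.linalg.lstsq(grad, y, rcond=None)
        if np.max(np.abs(kappa_new - kappa)) < tol:
            return kappa_new
        kappa = kappa_new
    return kappa
```

This function can be passed as the `fit_nlls` argument of the earlier `pea_fixed_point` sketch. Note that when $\psi$ is linear in $\kappa$ (the simple-polynomial and Hermite cases), $\tilde x_t$ does not depend on $\kappa$, so the first regression already solves the FOC and the inner loop converges in one step.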