Online Appendix to "Dynamic factor models with macro, frailty, and industry effects for U.S. default counts: the credit crisis of 2008"

Siem Jan Koopman (a), André Lucas (a,b), Bernd Schwaab (c)

(a) VU University Amsterdam and Tinbergen Institute
(b) Duisenberg school of finance
(c) European Central Bank, Financial Research Division

December 22, 2011

Appendix A1: estimation via importance sampling

An analytical expression for the maximum likelihood (ML) estimate of the parameter vector ψ of the MiMe DFM is not available. A feasible approach to the ML estimation of ψ is to maximize a likelihood function that is evaluated via Monte Carlo methods such as importance sampling. A short description of this approach is given below; a full treatment is presented by Durbin and Koopman (2001, Part II).

The observation density function of y = (y_1, ..., y_T) can be expressed by the joint density of y and f = (f_1, ..., f_T) where f is integrated out, that is

    p(y; ψ) = ∫ p(y, f; ψ) df = ∫ p(y|f; ψ) p(f; ψ) df,   (A.1)

where p(y|f; ψ) is the density of y conditional on f and p(f; ψ) is the density of f. A Monte Carlo estimator of p(y; ψ) can be obtained by

    p̂(y; ψ) = M^{-1} Σ_{k=1}^{M} p(y|f^(k); ψ),   f^(k) ~ p(f; ψ),

for some large integer M. The estimator p̂(y; ψ) is however numerically inefficient since most draws f^(k), k = 1, ..., M, will not contribute substantially to p(y|f; ψ) for any ψ. Importance sampling improves the Monte Carlo estimation of p(y; ψ) by sampling f from the Gaussian importance density g(f|y; ψ). We can express the observation density function p(y; ψ) by

    p(y; ψ) = ∫ [ p(y, f; ψ) / g(f|y; ψ) ] g(f|y; ψ) df = g(y; ψ) ∫ [ p(y|f; ψ) / g(y|f; ψ) ] g(f|y; ψ) df.   (A.2)

Since f is from a Gaussian density, we have g(f; ψ) = p(f; ψ) and g(y; ψ) = g(y, f; ψ) / g(f|y; ψ). In case g(f|y; ψ) is close to p(f|y; ψ) and in case simulation from g(f|y; ψ) is feasible, the Monte Carlo estimator of the likelihood function, given by

    p̂(y; ψ) = g(y; ψ) M^{-1} Σ_{k=1}^{M} p(y|f^(k); ψ) / g(y|f^(k); ψ),   f^(k) ~ g(f|y; ψ),   (A.3)

is numerically much more efficient; see Kloek and van Dijk (1978), Geweke (1989), and Durbin and Koopman (2001).

The importance density g(f|y; ψ) is based on an approximating linear Gaussian state space model with an observation equation for each y_jt in (1), given by

    y_jt = c_jt + θ_jt + ε_jt,   ε_jt ~ N(0, h_jt),   (A.4)

where c_jt is a known mean, θ_jt is the unobserved signal, and h_jt is a known variance, for j = 1, ..., J + N. For the normal variables y_jt, the signal θ_jt is equal to µ_jt of (5) and the variables c_jt = 0 and h_jt = σ_j^2 are known, with j = J + 1, ..., J + N. For the default counts y_jt in the approximating model, we let the signal θ_jt be equal to π_jt of (4), with j = 1, ..., J. The variables c_jt and h_jt for the default counts are determined such that the modes of p(f|y; ψ) and g(f|y; ψ) are equal; see Shephard and Pitt (1997), Durbin and Koopman (1997), and Durbin and Koopman (2001, pp. 191-195) for the details. The values for c_jt and h_jt are found iteratively by means of the Kalman filter and an associated smoothing method.

To simulate values from the resulting importance density g(f|y; ψ) based on the approximating model (A.4), the simulation smoothing method of Durbin and Koopman (2002) can be used. For a set of M draws f^(1), ..., f^(M) from g(f|y; ψ), the evaluation of the likelihood function (A.3) via importance sampling relies on the computation of p(y|f; ψ) and g(y|f; ψ), with f = f^(k) for k = 1, ..., M, and of g(y; ψ). The density p(y|f; ψ) is based on the model specifications in (3). The density g(y|f; ψ) is based on the approximating linear Gaussian model (A.4). The density g(y; ψ) is effectively the likelihood function of the approximating model (A.4) and can be computed via the Kalman filter; see Durbin and Koopman (2001). Testing the assumptions underlying the application of importance sampling can be carried out using the procedures proposed by, e.g., Koopman, Shephard, and Creal (2009).
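To make the mechanics of (A.1)-(A.3) concrete, the sketch below evaluates the importance-sampling log-likelihood in log space, with a log-sum-exp over the weights to guard against underflow. It is a minimal illustration rather than the authors' code: the quantities `log_g_y` (the Gaussian likelihood of the approximating model from the Kalman filter), `log_p_y_given_f` and `log_g_y_given_f` (the true and approximating observation log-densities), and the draws `f_draws` from g(f|y; ψ) are assumed to be supplied by a separate state space implementation.

```python
import numpy as np

def is_loglik(log_g_y, log_p_y_given_f, log_g_y_given_f, f_draws):
    """Importance-sampling estimate of log p(y; psi), cf. equation (A.3).

    log_g_y         : log-likelihood of the approximating Gaussian model (Kalman filter)
    log_p_y_given_f : callable, f -> log p(y | f; psi), the true observation density
    log_g_y_given_f : callable, f -> log g(y | f; psi), the approximating Gaussian density
    f_draws         : iterable of M draws f^(k) from g(f | y; psi) (simulation smoother)
    """
    # log importance weights: log w_k = log p(y | f^(k)) - log g(y | f^(k))
    log_w = np.array([log_p_y_given_f(f) - log_g_y_given_f(f) for f in f_draws])
    # log of the average weight via the log-sum-exp trick, to avoid underflow
    c = log_w.max()
    log_mean_w = c + np.log(np.mean(np.exp(log_w - c)))
    return log_g_y + log_mean_w
```

In practice such an estimate is maximized over ψ with a numerical optimizer; keeping the underlying random draws fixed across evaluations of ψ makes the simulated likelihood a smooth function of the parameters.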

Appendix A2: estimation of latent factors

Inference on the latent factors can also be based on importance sampling. In particular, it can be shown that

    E(f|y; ψ) = ∫ f p(f|y; ψ) df = ∫ f w(y, f; ψ) g(f|y; ψ) df / ∫ w(y, f; ψ) g(f|y; ψ) df,

where w(y, f; ψ) = p(y|f; ψ) / g(y|f; ψ). The estimation of E(f|y; ψ) via importance sampling can be achieved by

    f̂ = Σ_{k=1}^{M} w_k f^(k) / Σ_{k=1}^{M} w_k,

with w_k = p(y|f^(k); ψ) / g(y|f^(k); ψ) and where f^(k) ~ g(f|y; ψ) is obtained by simulation smoothing. The standard error of f̂_i, the ith element of f̂, is denoted by s_i and is computed by

    s_i^2 = Σ_{k=1}^{M} w_k (f_i^(k))^2 / Σ_{k=1}^{M} w_k − f̂_i^2,

where f_i^(k) is the ith element of f^(k). (A small numerical sketch of these computations is given after Appendix A4 below.)

Appendix A3: macro data listing and time series plots

Table 1 and Figure 1 contain a listing and time series plots, respectively, of the macro data that is used for the empirical part of our analysis.

Appendix A4: information criteria

Table 2 reports likelihood-based information criteria. The standard AIC and BIC refer to the whole model, which includes the default and macro data parts. The Bai and Ng (2002) panel criteria refer to the fit of the macro factors f_t^m to the macro data y_jt for j = J + 1, ..., J + N.
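The weighted estimates of Appendix A2 reduce to a few array operations once the draws and importance weights are available; a minimal sketch with hypothetical array names follows.

```python
import numpy as np

def factor_estimates(f_draws, w):
    """Weighted factor estimates and standard errors from importance-sampling output.

    f_draws : (M, dim_f) array, row k is the draw f^(k) from g(f | y; psi)
    w       : (M,) array of importance weights w_k = p(y | f^(k)) / g(y | f^(k))
    """
    w = w / w.sum()                        # normalise the weights
    f_hat = w @ f_draws                    # weighted mean, estimate of E(f | y; psi)
    second_moment = w @ f_draws**2         # weighted second moment, element by element
    s = np.sqrt(second_moment - f_hat**2)  # standard errors s_i
    return f_hat, s
```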

Table 1: Macroeconomic Time Series Data
The table gives a full listing of the included macroeconomic time series data x_t and binary indicators b_t. All time series are obtained from the St. Louis Fed online database, http://research.stlouisfed.org/fred2/.

(a) Macro indicators and business cycle conditions (5 series):
    Industrial production index (indpro); Disposable personal income (dspi); ISM Manufacturing index (napm); Uni Michigan consumer sentiment (umich); New housing permits (permit).

(b) Labour market conditions (4 series):
    Civilian unemployment rate (unrate); Median duration of unemployment (uempmed); Average weekly hours index (AWHI); Total non-farm payrolls (payems).

(c) Monetary policy and financing conditions (6 series):
    10-year treasury rate, constant maturity (gs10); Federal funds rate (fedfunds); Moody's seasoned Baa corporate bond yield (baa); Mortgage rates, 30 year (mortg); Government bond term structure spread (tssprd); Credit spread corporates over treasuries (credtsprd).

(d) Bank lending (2 series):
    Total consumer credit outstanding (totalsl); Total real estate loans, all banks (realln).

(e) Cost of resources (3 series):
    PPI fuels and related energy (ppieng); PPI finished goods (ppifgs); Trade-weighted U.S. dollar exchange rate (twexbmth).

(f) Stock market returns (2 series):
    S&P 500 yearly returns (s_p500); S&P 500 return volatility (vola).

Total: 22 series.
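All series can be retrieved from FRED. The sketch below is illustrative rather than a reconstruction of the authors' data work: it assumes the pandas_datareader package, uses the uppercase FRED mnemonics corresponding to a few of the shortnames in the table, and applies the yearly growth-rate transformation shown in Figure 1 (rates and spreads may instead enter in levels or differences).

```python
import pandas_datareader.data as web

# uppercase FRED mnemonics for a few of the Table 1 series (illustrative subset)
codes = ["INDPRO", "UNRATE", "FEDFUNDS", "PAYEMS"]
raw = web.DataReader(codes, "fred", start="1971-01-01", end="2009-03-31")

# aggregate monthly observations to quarterly averages and form yearly growth rates
quarterly = raw.resample("Q").mean()
yoy_growth = quarterly.pct_change(4)  # four-quarter growth, as plotted in Figure 1
```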

Figure 1: Macroeconomic and financial time series data
We present time series of yearly growth rates in macroeconomic and financial data. For a listing of the data we refer to Table 1.
[22 panels over roughly 1970-2010, one per series listed in Table 1: indpro, dspi, napm, umich, permit, unrate, uempmed, AWHI, payems, gs10, fedfunds, baa, mortg, totalsl, realln, ppieng, ppifgs, twexbmth, tssprd, credtsprd, s_p500, and vola.]

Table 2: Information criteria
We report likelihood-based information criteria (IC) to guide our model selection. The estimation sample is from 1971Q1 to 2009Q1. Row minimum values are printed in bold.

                 F1          F2          F3          F4
loglik        -6530.2     -6277.3     -6182.5     -6133.7
#par               37          64          91         118
AIC           13134.5     12683.2     12548.2     12505.4
BIC           13415.0     13168.1     13237.2     13398.4
Bai-Ng IC1     -0.246      -0.454      -0.523      -0.474
Bai-Ng IC2     -0.239      -0.440      -0.502      -0.446
Bai-Ng IC3     -0.259      -0.481      -0.563      -0.527

Appendix A5: macroeconomic risk factor estimates

The top panel of Figure 2 presents the estimated risk factors f_t^m as defined in (4) and (5). We plot the estimated conditional mean of the factors, along with approximate standard error bands at a 95% confidence level. The factors are ordered row-wise from top-left to bottom-right according to their share of explained variation for the macro and financial data. The bottom panel of Figure 2 presents the shares of variation in each macroeconomic time series that can be attributed to the common macroeconomic factors. The first two macroeconomic factors load mostly on labor market, production, and interest rate data. The last two factors displayed in the top panel of Figure 2 load mostly on survey sentiment data and changes in price level indicators. The macroeconomic factors capture 27.2%, 21.3%, 11.7%, and 8.3% of the total variation in the macro data panel, respectively (68.6% in total). The estimated factor loadings reveal that all four common factors f_t^m tend to load more on default probabilities of firms rated investment grade rather than speculative grade.
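The variance shares quoted above can be obtained in several ways; one simple convention is to regress each standardized macro series on the smoothed factor estimates and read off the R². The sketch below follows that convention with hypothetical array names; it is an illustration, not the exact computation behind these figures.

```python
import numpy as np

def variance_shares(Y, F):
    """R^2 of each standardized macro series on the smoothed common factors.

    Y : (T, N) standardized macro panel
    F : (T, r) smoothed factor estimates f_t^m (here r = 4)
    """
    beta, *_ = np.linalg.lstsq(F, Y, rcond=None)    # OLS loadings, one column per series
    resid = Y - F @ beta
    return 1.0 - resid.var(axis=0) / Y.var(axis=0)  # share of variance per series

# a panel-wide share (e.g. the 68.6% quoted above) is the average across the N series
```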

Figure 2: Macroeconomic risk factor estimates
The four panels present the estimated risk factors f_t^m as defined in (4) and (5). We present the estimated conditional mean of the factors, along with approximate standard error bands at a 95% confidence level. Details on the estimation and signal extraction methodology are available in Appendix A1 of this web appendix. Factors f^m are common to the (continuous) macro and financial as well as the (discrete) default count data.
[Four panels over 1970-2010: first, second, third, and fourth factor f_t^m, each shown with its 0.95 standard error band.]

Figure 3: Macroeconomic risk factor loadings
The panel indicates which share of the variation in each macro-financial time series listed in Table 1 and plotted in Figure 1 can be attributed to each factor in f^m as plotted in Figure 2.

Appendix A6: macro principal components

Figure 4 plots the first four principal components from our macro data. Missing values are set to zero after standardization. The principal components change little when SLO bank lending standards are added to the panel as an additional explanatory variable.

Figure 4: Principal components of macro data
We plot the first four principal components from the macro data listed in Table 1. We also plot the principal components for the case when bank lending standards are added to the panel.
[Four panels over 1970-2008: 1st to 4th principal component, each shown with and without the SLO survey included.]
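A minimal sketch of the principal-component computation described above: each series is standardized, missing values are then set to zero, and the first four components are extracted via a singular value decomposition. Array names are illustrative.

```python
import numpy as np

def first_pcs(X, n_pc=4):
    """First principal components of a (T, N) panel X with np.nan marking missing values."""
    Z = (X - np.nanmean(X, axis=0)) / np.nanstd(X, axis=0)  # standardize each series
    Z[np.isnan(Z)] = 0.0                                    # missing values set to zero
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    return U[:, :n_pc] * S[:n_pc]                           # principal component scores
```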

Appendix A7: graphs to illustrate model fit

Figure 5 presents the model-implied economy-wide default rate against the aggregate observed rates. We distinguish four specifications with (a) no factors, (b) f_t^m only, (c) f_t^m, f_t^d, and (d) all factors f_t^m, f_t^d, f_t^i. Based on these specifications, we assess the goodness of fit achieved at the aggregate level when adding latent factors. The static model fails to capture the observed default clustering around recession periods. The changes in the default rate for the static model are due to changes in the composition and quality of the rated universe. Such changes are captured by the rating- and industry-specific intercepts in the model. The upper-right panel indicates that the inclusion of macro variables helps to explain default rate variation. The latent frailty dynamics given by f_t^d, however, are clearly required for a good model fit. This holds both in low-default periods such as 2002-2007, as well as in high-default periods such as 1991. The bottom graphs of Figure 5 indicate that industry-specific developments cancel out in the cross-section to some extent and can thus be diversified. As a result, they may matter less from a (fully diversified) portfolio perspective.

Figure 5: Model fit to observed aggregate default rate
Each panel plots the observed quarterly default rate for all rated firms against the default rate implied by different model specifications. The models feature either (a) no factors, (b) only macro factors f^m, (c) macro factors and a frailty component f^m, f^d, and (d) all factors f^m, f^d, f^i, respectively.
[Four panels over 1980-2010, each plotting the model-implied default rate for one specification against the actual rate.]

Appendix A8: prediction error diagnostics

We report residual diagnostics for one-step-ahead prediction errors that pertain to the default count panel data. We define a time series of prediction errors as

    r̂_{t|t-1} = ( Σ_{j=1}^{J} δ_jt )^{-1} Σ_{j=1}^{J} δ_jt ( y_jt − π̂_{jt|t-1} k_jt ),   (A.5)

where y_jt are the quarterly default counts, k_jt are the respective numbers of firms at risk at the beginning of quarter t, j = 1, ..., J, the indicator function δ_jt is equal to one if k_jt > 0 and zero otherwise, and π̂_{jt|t-1} is the one-step-ahead predicted probability of default (PD) for firms in cross-section j. For prediction, parameter estimates are kept at full-sample values for computational reasons, while risk factor estimates are obtained from an expanding window that contains data up to and including t − 1.

Figure 6 reports (a) mean prediction errors r̂_{t|t-1} over time, (b) squared prediction errors r̂²_{t|t-1}, (c) a QQ plot of the prediction errors against the normal, (d) an error histogram and associated kernel density estimate, as well as (e) the autocorrelation functions (ACF) pertaining to the errors and the squared errors, respectively. Overall, the errors are zero on average and roughly standard normally distributed. Some larger deviations of observed from predicted values occur in the recession periods of 1991 and 2008-09. We note some leftover autocorrelation at the fourth lag in the error ACF. Overall, however, error autocorrelation does not seem to be a major issue. The autocorrelation at the fourth lag, along with some larger residuals in 1991 and 2008-09, both disappear if we base our diagnostics on deviations implied by smoothed (full-sample) risk factor estimates.
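For completeness, a small sketch of how the error series (A.5) can be computed from panel arrays of counts, exposures, and predicted probabilities; the array names are illustrative, not the authors' code.

```python
import numpy as np

def prediction_errors(y, k, pi):
    """Aggregate one-step-ahead prediction errors as in (A.5).

    y  : (T, J) quarterly default counts
    k  : (T, J) firms at risk at the start of each quarter
    pi : (T, J) predicted default probabilities pi_hat_{jt|t-1}
    """
    delta = (k > 0).astype(float)               # delta_jt: cross-section j is non-empty
    raw = delta * (y - pi * k)                  # y_jt - pi_hat_{jt|t-1} * k_jt where active
    return raw.sum(axis=1) / delta.sum(axis=1)  # average over active cross-sections, per t
```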

Figure 6: Residual diagnostics
We report (a) prediction errors r̂_{t|t-1} over time, (b) squared prediction errors r̂²_{t|t-1}, (c) a QQ plot of the prediction errors against the normal, (d) an error histogram and associated kernel density estimate, (e) an estimate of the prediction error autocorrelation function, and (f) the autocorrelation function for squared errors.
[Panels: residuals and squared residuals over 1975-2010, QQ plot against the normal, histogram with kernel density estimate, and ACFs of residuals and squared residuals up to lag 10.]

Appendix A9: frailty effects and macro data

Figure 7 compares three different estimates of the frailty factor f_t^d. The estimates are based on the same econometric specification but take in different macro data: (i) the original panel data as described in the paper, (ii) the original data stacked with its six months lagged values, doubling its cross-sectional dimension, and (iii) macro data that replaces annual with quarterly growth rates. The frailty factor estimate is fairly robust to such changes and to leading/lagging of the macro data. This suggests that frailty is not caused by such timing effects. The reason why timing is not very important may be that such timing effects are captured indirectly in a static factor structure, see Stock and Watson (2002). In a linear Gaussian factor model, the static factors can be interpreted as a rotated version of current and lagged structural factors.
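The three macro panels compared here amount to simple transformations of the underlying series. A sketch follows, using a randomly generated stand-in for the quarterly panel of series in levels; the data construction is illustrative only.

```python
import numpy as np
import pandas as pd

# stand-in for the quarterly panel of the 22 Table 1 series in levels (illustrative data)
idx = pd.period_range("1971Q1", "2009Q1", freq="Q").to_timestamp()
levels = pd.DataFrame(np.random.default_rng(0).lognormal(size=(len(idx), 22)),
                      index=idx, columns=[f"series_{i}" for i in range(22)])

yoy = levels.pct_change(4)                   # (i)   original panel: yearly growth rates
lagged = yoy.shift(2).add_suffix("_lag6m")   #       six-month (two-quarter) lags
stacked = pd.concat([yoy, lagged], axis=1)   # (ii)  44-covariate panel: current plus lagged
qoq = levels.pct_change(1)                   # (iii) quarterly instead of yearly growth rates
```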

Figure 7: Frailty factor estimates for different macro data panels
We plot three conditional mean estimates of the frailty factor based on our favorite specification. Each model, however, now takes in a different transformation of the macro data: the original macro panel (22 covariates), the original panel stacked with its six months lagged values (44 covariates), and the original panel that replaces annual growth rates with quarterly rates.
[One panel over 1970-2008 plotting three series: the frailty factor as reported in the paper, the frailty factor with X_t contemporaneous and six months lagged, and the frailty factor with X_t in quarter-on-quarter growth rates.]

Appendix A10: (un)conditional loss distributions

Section 4 considers a portfolio of short-term (rolling) loans to all Moody's rated U.S. firms. Loans are extended at the beginning of each quarter between 1981Q1 and 2008Q4 at no interest. A non-defaulting loan is re-extended after three months. The loan exposure to each firm at time t is given by the inverse of the total number of firms at that time, that is (Σ_j k_jt)^{-1}. This implies that the total credit portfolio value is $1 at all times. For simplicity, we assume a stressed loss-given-default of 80%. This example portfolio is stylized in many regards. Nevertheless, it allows us to investigate the importance of macroeconomic, frailty, and industry-specific dynamics for the risk measurement of a diversified loan or bond portfolio.

It is straightforward to simulate the portfolio credit loss distribution and associated risk measures for arbitrary credit portfolios in such a setting. First, the exposures k_jt are chosen to correspond to the portfolio exposures. Second, one uses the estimation methods detailed in this appendix to estimate the current position of the latent systematic risk factors. Third, one can use the transition equation (2) directly to simulate future risk factor realizations. Finally, conditional on the risk factor path, the defaults can be simulated by combining (3) and (4). Term structures of default rates can easily be obtained by combining model-implied quarterly probabilities over time.
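The simulation steps above can be illustrated with a deliberately simplified sketch: a single latent factor with an AR(1) transition standing in for (2), a logistic link standing in for (3)-(4), one homogeneous bucket of firms, equal exposures, and an 80% loss-given-default. All parameter values are placeholders rather than estimates from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_losses(n_sims=10_000, horizon=4, n_firms=2000,
                    phi=0.9, sigma=0.3, alpha=-6.0, beta=1.0, lgd=0.8):
    """One-year loss distribution for an equally weighted portfolio of rolling loans."""
    losses = np.zeros(n_sims)
    for s in range(n_sims):
        f, loss = 0.0, 0.0
        for t in range(horizon):
            f = phi * f + sigma * rng.standard_normal()     # simulate the factor path, cf. (2)
            pi = 1.0 / (1.0 + np.exp(-(alpha + beta * f)))  # quarterly default probability
            defaults = rng.binomial(n_firms, pi)            # draw defaults given the factor
            loss += defaults * lgd / n_firms                # equal exposures summing to $1
        losses[s] = loss
    return losses

losses = simulate_losses()
print("99% value-at-risk:", np.quantile(losses, 0.99))
```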

References

Bai, J. and S. Ng (2002). Determining the number of factors in approximate factor models. Econometrica 70(1), 191-221.

Durbin, J. and S. J. Koopman (1997). Monte Carlo maximum likelihood estimation for non-Gaussian state space models. Biometrika 84(3), 669-684.

Durbin, J. and S. J. Koopman (2001). Time Series Analysis by State Space Methods. Oxford: Oxford University Press.

Durbin, J. and S. J. Koopman (2002). A simple and efficient simulation smoother for state space time series analysis. Biometrika 89(3), 603-616.

Geweke, J. (1989). Bayesian inference in econometric models using Monte Carlo integration. Econometrica 57, 1317-1339.

Kloek, T. and H. K. van Dijk (1978). Bayesian estimates of equation system parameters: an application of integration by Monte Carlo. Econometrica 46, 1-20.

Koopman, S. J., N. Shephard, and D. Creal (2009). Testing the assumptions behind importance sampling. Journal of Econometrics 149(1), 2-10.

Shephard, N. and M. K. Pitt (1997). Likelihood analysis of non-Gaussian measurement time series. Biometrika 84, 653-667.

Stock, J. and M. Watson (2002). Forecasting using principal components from a large number of predictors. Journal of the American Statistical Association 97(460), 1167-1179.