Cyclicality in Losses on Bank Loans
Bart Keijsers, Bart Diris, Erik Kole

Econometric Institute, Erasmus School of Economics, Erasmus University Rotterdam

May 9, 2014

Abstract

Cyclicality in the losses of bank loans is important for bank risk management and the resulting credit supply. Because loans have a different risk profile than bonds, evidence of cyclicality in bond losses need not apply to loans. Based on unique data, we show that the default rate and loss given default of bank loans share a cyclical component, correlated with the business cycle. We infer this cycle with a new model that distinguishes loans with large and small losses. The losses within the groups stay constant, but the fraction of loans with large losses increases during downturns.

Keywords: loss given default, default rates, credit risk, capital requirements, dynamic factor models

JEL classification: C32, C58, G21, G33

The authors thank NIBC Bank, in particular Michel van Beest, for providing access to the PECDC data. The opinions expressed in this article are the authors' own and do not reflect the view of NIBC Bank or the PECDC. Corresponding author. Address: Burg. Oudlaan 50, Room H08-11, P.O. Box 1738, 3000 DR Rotterdam, The Netherlands. E-mail address: keijsers@ese.eur.nl.
1 Introduction

The recent subprime credit crisis and the European sovereign debt crisis have put the risk management of banks in the spotlight again. On the one hand, regulators have imposed stricter capital requirements on banks. On the other hand, politicians and society in general are pressing banks to keep supplying loans to small and medium enterprises (SMEs), which are much more dependent on bank loans than large companies. Risk models should therefore give an accurate estimate of the risks related to bank loans, which requires a good understanding of the characteristics of losses on bank loans.

In this paper, we empirically investigate the losses on bank loans. The loss of a portfolio of loans is typically split into three quantities: the default rate, the loss given default and the exposure at default. The first two elements are typically treated as outcomes of random processes, while the third is taken as given. We propose a model in which the default rate and the loss given default share a common latent component. We use this model to analyze a unique sample of defaulted bank loans. This setup allows us to analyze whether such a component shows cyclical behavior, how both processes depend on it, and whether it is related to macro variables and the business cycle. The answers to these questions are important for designing and evaluating the risk management systems of banks, in particular regarding the requirement in Basel II that loss given default should reflect economic downturn conditions where necessary to capture the relevant risks. Basel Committee on Banking Supervision (2005) makes a specific reference to the cyclical behavior of loss given default.

We focus on bank loans in contrast to bonds, because they have been studied relatively little and differ from bonds for multiple reasons. Banks monitor their loans more closely than bond owners can, which influences both the default rate and the loss given default.
Bank loans are often more senior than other forms of credit and are more likely to be backed by collateral, which reduces the loss given default. Finally, banks may be able to postpone the sale of a borrower's assets until a favorable economic state, hoping to receive a higher price. These effects can make the default rate and the loss given default less cyclical and less interrelated.
Research on bank loans has been scarcer than research on bonds, because data on defaulted bank loans are not easily available and typically constitute small samples (Grunert and Weber, 2009). The evidence we provide is based on default data from the Pan-European Credit Database Consortium (PECDC). Several banks formed this consortium in 2004 to pool anonymized information on their defaulted loans for research on credit risk. Currently, the consortium counts 39 banks, not all of them European. Each member has access to a subsample of the database. Through NIBC Bank (NIBC), a Dutch bank, we have access to approximately 13,000 defaults over the sample period. As a consequence, our research is based on a larger sample and a broader cross section than existing research such as Grunert and Weber (2009), Calabrese and Zenga (2010) and Hartmann-Wendels et al. (2014), which either used a smaller sample or focused on a single country.

To capture time and cross-sectional variation in default rates and loss given default, as well as the dependence between them, we construct a model that consists of three components. As the first component, we treat the default of a company on a loan as the outcome of a Bernoulli random variable. The second component relates to the loss given default. An initial inspection of our data shows that the loss given default has a clear bimodal distribution: either most of the loan is recovered, or most of the loan is lost. In contrast to bonds, the loss given default can exceed 100% or fall below 0%. In the first case, the bank loses more than the initial loan, for example because of principal advances, which means that the bank lends an additional amount to the borrower for recovery. In the second case, the bank recovers more than the initial loan, for example because it is entitled to penalty fees or additional interest, or because it can sell the collateral for a high price.
Therefore, we model the loss given default as the realization of a normally distributed random variable with either a low mean, when the loan is a good one, or a high mean, when the loan is a bad one. Whether a loan is good or bad is determined by a second, latent, Bernoulli variable. The parameters of the Bernoulli variables can vary according to observable characteristics of loans, such as seniority, security, and the industry to which the company belongs, and they can show common time variation because of a latent factor.
This latent factor constitutes the third part of our model and follows an autoregressive process. Our model is a state space model with non-linear and non-Gaussian measurement equations. It is closely related to the mixed-measurement dynamic factor model of Creal et al. (2014). The main difference is the component for loss given default. Creal et al. (2014) use a standard Beta distribution for the loss given default, while we propose a mixture of normal distributions. Their market-implied loss given default of bonds is bounded between zero and one, contrary to the bank loans in our study. Further, they include macroeconomic variables in their model, while we compare them after estimation of the factor. Our model is also related to Bruche and González-Aguado (2010), who use a Beta distribution to describe the loss given default. They model default rates of bonds and their loss given default as jointly dependent on a latent Markov chain. Switches in their Markov chain give rise to a credit cycle. We model the latent component as an autoregressive process, because the inferred process can then be linked more easily to macroeconomic variables at different leads and lags and gives a more granular view of the credit cycle.

Because our model is not a standard linear Gaussian state space model, we cannot use the Kalman filter to infer the latent process, nor straightforward maximum likelihood estimation to determine the parameters of our model. Instead, we show how the simulation-based methods of Jungbacker and Koopman (2007) can be used to infer the latent process, and the Expectation-Maximization algorithm of Dempster et al. (1977) to estimate the parameters. We consider various alternative specifications to test for cross-sectional differences in the time variation of the losses.

Our results show that default rates and loss given default share a common component for bank loans.
The component shows cyclical behavior with values between 7% and 15%, and leads to default rates that fluctuate between 0.5% and 4%, while loss given default fluctuates between 10% and 17%. High values of the common component indicate a bad credit environment with high default rates and high values for loss given default. The cyclical component shows a considerable negative contemporaneous correlation of −0.4
with the growth rate of European GDP and Bloomberg's Financial Conditions Index. Low GDP growth and a deterioration of financial conditions coincide with bad credit conditions. Interestingly, the credit cycle that we infer leads the European unemployment rate by four quarters. We conclude that the losses on bank loans show a joint component that is related to the macroeconomy.

We also find that the time variation in loss given default can be explained by variation in the probability of a defaulted loan being good or bad. We do not find evidence that the loss given default increases for all defaulted loans, or that one of the two groups shows a severe deterioration in its recovery. This finding can help banks in monitoring bank loans. When the credit cycle deteriorates, they should try to determine which loans may end up badly, as the proportion of bad to good losses is the main source of the time variation in the loss given default.

Our findings contribute to the existing literature on credit risk in two ways. First, we show that the loss given default on bank loans has properties that differ from bonds. While loss given default is commonly modeled by a Beta distribution (see, e.g., Bruche and González-Aguado, 2010; Creal et al., 2013), for bank loans a mixture of normal distributions is more suitable. Second, our study shows that, just as for bonds, the losses on bank loans have a cyclical component that influences both their default rate and loss given default. Altman et al. (2005), Allen and Saunders (2003) and Schuermann (2004) document such a component for bonds, whereas Bruche and González-Aguado (2010) and Creal et al. (2014) show how this common component can be modeled. We complement these papers with our evidence for bank loans and a model that is specifically tailored to it.

The remainder of this paper is structured as follows. In Section 2 we discuss the PECDC and the data we obtain from it. In Section 3 we propose our model.
We discuss the analysis and estimation in Section 4. Section 5 contains the results and Section 6 concludes. In the appendices to this paper we provide more detailed information on our data and the methodology.
2 Data

In general, not much bank loan default data is available. Defaults do not occur often and take time to resolve. Additionally, information on losses is not transparent and details are only known to the parties involved. In 2004, several banks cooperated to form the Pan-European Credit Database Consortium (PECDC), a cross-border initiative to help measure credit risk and support statistical research for Basel II. The members pool their data to create a large (anonymous) database of resolved defaults. A resolved default is a default that is no longer in the recovery process, so that the final loss given default (LGD) is known. Every member has access to a subsample of the database, depending on the loans supplied, to ensure anonymity. We have been granted access to NIBC's subset of this large LGD database. The database available to NIBC contains 19,366 counterparties and 40,406 loans.¹ Below we elaborate on the details of the dataset.

2.1 Loss Given Default

In case of a default, the lender can incur losses, because the borrower is unable to meet its obligations. The LGD measures this loss and is defined as the loss relative to the exposure at default (EAD). Hence, a value of 0 indicates no loss and an LGD of 1 a total loss. The LGD in the PECDC database is the economic LGD, based on cash flows over a longer period of time. The cash flows need to be discounted to get the economic LGD. We follow industry practice by applying a discount rate that is a combination of the risk-free rate and a spread over the base rate. The spread is the extra basis points paid for the loan. Although the LGD is expected to be between 0 and 1, it is possible to have an LGD outside this range, e.g. due to principal advances or a profit on the sale of assets. Figure 1 presents the bimodal empirical distribution and shows that values outside the [0, 1] interval are observed. Abnormally high or low values are excluded. Additionally, we

¹ Version December
apply other filters, following mostly NIBC's internal policy (Van Beest, 2008), to exclude extraneous observations. For more details, see appendix A.1.

[Figure 1 about here.]

The PECDC LGD database for resolved defaults contains details of defaults from 1983 onwards. However, these are not all the defaults of these years. The first data were delivered by the banks in 2005, including defaults from the past. Most observations in the years before 2000 are the substantial losses still on the books that took a long time to resolve. Not all banks may have databases containing all relevant data from many years ago. This implies that the early years do not form a representative sample for that period.

Workout period

The main difference between bank loan and bond defaults is the workout period. Bond holders directly observe a drop in value. With defaulted bank loans, a recovery process starts that should lead to debt repayment. At the end of the recovery process, when no more cash can be obtained, the default is resolved. The period from the default date to the resolved date is called the workout period. Most defaults are resolved within 1 to 3 years after default, but figure 2 shows that the recovery process can last more than 5 years. The average workout period is 1.38 years and the median is 1.02 years. Figure 2 shows the average LGD per bucket, where buckets correspond to subsamples based on the length of the workout period in years. The LGD is significantly higher for longer workout periods. Part of the LGD difference is explained by the time value of money: the cash flows are discounted at a higher rate for a longer workout period, thus reducing the recovery and increasing the LGD. The workout period is further an indication of how hard it is to recover from a default. Banks prefer fast recoveries due to the time value of money. If the recovery takes time, it can be due to issues with restructuring or selling the assets.
If demand is high for an asset, its value will be higher and it will be sold or restructured faster.
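Part of the workout-period effect above is the time value of money. The economic LGD computation from section 2.1, discounting recovery cash flows back to the default date, can be sketched as follows; the discount rate and cash flows are illustrative assumptions, not values from the PECDC data.

```python
def economic_lgd(ead, recoveries, discount_rate):
    """Economic LGD: one minus the discounted recoveries relative to EAD.

    ead           -- exposure at default
    recoveries    -- iterable of (time_in_years, cash_flow) pairs after default
    discount_rate -- annual rate, e.g. risk-free rate plus the loan spread
    """
    pv = sum(cf / (1.0 + discount_rate) ** t for t, cf in recoveries)
    return 1.0 - pv / ead

# A loan with EAD 100 recovering 40 after 1 year and 50 after 3 years,
# discounted at 5% (illustrative numbers): LGD is roughly 0.19.
lgd = economic_lgd(100.0, [(1.0, 40.0), (3.0, 50.0)], 0.05)
```

Note that the same nominal recovery received later yields a higher LGD, which is the discounting channel behind the bucket pattern described above.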
The LGD is known once defaults are resolved and therefore, by definition, the later years of the database (2010 and 2011) only contain defaults with shorter workout periods. Since a shorter workout period is related to a smaller LGD, the LGD is underestimated in the final years. Due to the data issues described above, we restrict our analysis accordingly.

[Figure 2 about here.]

Empirical distribution

The sample consists of 13,020 observations of mostly European defaults. We need to exclude some observations to ensure data quality, but still obtain one of the largest datasets for bank loan LGD studied thus far. Grunert and Weber (2009) summarize the empirical studies on bank loan recovery rates; the largest dataset they found studied 5,782 observations. More recently, Calabrese and Zenga (2010) study a portfolio of 149,378 Italian bank loan recovery rates and Hartmann-Wendels et al. (2014) consider 14,322 defaulted German lease contracts. However, these studies focus on defaults from a single country, while our dataset is much richer.

It is a stylized fact that LGD follows a bimodal distribution with most observations close to 0 or 1, see for example Schuermann (2004). In most cases, there is either no loss or a full loss on the default. Figure 1 shows that this also holds for our sample. By far most losses are close to zero, but there is also a peak at 1. The data is limited to the range −0.5 to 1.5. Nevertheless, more than 10% of the observations (1,618 of 14,805 in the sample) are outside the [0, 1] interval.

The yearly average LGD time series is shown in figure 3. It is the average LGD of defaults per year. The LGD decreases after a high value in 2002 and stays fairly stable for the rest of the period. In 2008, though, there is a peak in the series due to the crisis.

[Figure 3 about here.]
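The yearly average LGD series of figure 3 is a simple group-by computation: defaults are grouped by the year of the default date and their LGDs averaged. A sketch with toy records (not PECDC data):

```python
from collections import defaultdict

def yearly_average_lgd(records):
    """Average LGD per default year.

    records -- iterable of (default_year, lgd) pairs, grouped by the
    year of the default date (not the resolution date).
    """
    totals = defaultdict(lambda: [0.0, 0])
    for year, lgd in records:
        totals[year][0] += lgd
        totals[year][1] += 1
    return {year: s / n for year, (s, n) in sorted(totals.items())}

# Illustrative toy records, not PECDC data:
sample = [(2007, 0.05), (2007, 0.95), (2008, 0.9), (2008, 0.8), (2008, 0.1)]
averages = yearly_average_lgd(sample)  # averages close to {2007: 0.5, 2008: 0.6}
```

Grouping by default date is what later induces the resolution bias discussed above: recent years only contain the quickly resolved, typically smaller, losses.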
2.2 Default rate

The default rate (DR) is the realization of the probability of default (PD) for the portfolio and is the fraction of defaults in a year relative to the portfolio size. The DR is reported annually. The PECDC was initially founded to pool defaults, not default rates. In 2009, the observed DR database was started. Since not all banks participated from the start and some loans were outside the internal administration, the first years contain fewer observations. Figure 3 shows the yearly default rates. The number of loans/defaults in 2002 is only 1.3%/2.2% of the number in 2003 and is therefore not representative of the entire portfolio. To match the time period of the LGD dataset, we only use the data from the overlapping period.

The DR dataset is dominated by Southern European loans (16.6% and 68.1% of all loans and defaults, respectively), even though the LGD sample only contains 7.3% Southern European defaults. The defaults in the LGD sample, in contrast, are for 71.1% from Western and Northern Europe. Figure 4 shows that the Southern European DR is quite high, mainly due to the relatively high DR for small and medium enterprises (SMEs), and significantly influences the DR time series. We exclude these loans from the DR sample, so that it more accurately matches the LGD sample. We exclude Oceania loans for the same reason, although their impact on the average DR is less severe: they represent 14.2% of all loans in the DR database, but only 1.5% of the defaults in the LGD database. An advantage of using both LGD and DR from the PECDC is that we are able to investigate subsets based on geography and industry.

[Figure 4 about here.]

3 Model specification

We propose a mixed-measurement model, in which the observations are allowed to follow different distributions, as long as they depend on the single latent factor α. This is similar to the model by Creal et al. (2014).
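As a rough illustration of this mixed-measurement idea, the sketch below simulates one latent AR(1) factor that drives both the probability of a bad loss (and hence the LGD mixture) and the probability of default. All parameter values are illustrative assumptions, not the paper's specification or estimates.

```python
import math
import random

random.seed(0)

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

# Illustrative parameters (assumptions, not the paper's estimates)
rho, sigma_eta = 0.9, 0.3           # AR(1) latent factor
lam0, lam1 = -1.0, 1.0              # loading of P(bad loss) on the factor
gam0, gam1 = -3.5, 0.5              # loading of P(default) on the factor
mu_good, mu_bad, sigma_eps = 0.05, 0.85, 0.1

T, loans_per_period = 40, 500
alpha, lgds, defaults = 0.0, [], []
for t in range(T):
    alpha = rho * alpha + random.gauss(0.0, sigma_eta)
    p_bad = logistic(lam0 + lam1 * alpha)   # mixture probability of a bad loss
    q_def = logistic(gam0 + gam1 * alpha)   # probability of default
    n_def = sum(random.random() < q_def for _ in range(loans_per_period))
    defaults.append(n_def)
    for _ in range(n_def):                  # LGD observed only for defaults
        mu = mu_bad if random.random() < p_bad else mu_good
        lgds.append(random.gauss(mu, sigma_eps))
```

In a bad credit state (high α), both the default probability and the fraction of bad losses rise, so the simulated default rate and average LGD move together, which is exactly the common cyclical component the model is designed to capture.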
3.1 Loss Given Default

The LGD is usually modeled with a Beta distribution, motivated by the fact that it is defined on the [0, 1] interval, see for example Schuermann (2004). This is not necessarily true for bank loan LGD, see section 2.1. Considering the empirical distribution in figure 1, we propose a mixture of two normals and define distributions 1 and 2 as the distributions for good and bad losses:

\[
y^l_{it} \sim
\begin{cases}
N\!\left(\sum_{j=1}^{n} I[i \in G_j]\,\mu_{j1},\; \sigma_\varepsilon^2\right) & \text{if } s_{it} = 0 \text{ (good loss)},\\[4pt]
N\!\left(\sum_{j=1}^{n} I[i \in G_j]\,\mu_{j2},\; \sigma_\varepsilon^2\right) & \text{if } s_{it} = 1 \text{ (bad loss)},
\end{cases}
\tag{1}
\]

with y^l_it the LGD of loan i at time t, for i = 1, ..., N_1 and t = 1, ..., T. I[A] is the indicator function, which is 1 if A is true and 0 otherwise, and G_1, ..., G_n are the sets into which the loans can be grouped based on loan characteristics such as industry or asset class. We set μ_j1 < μ_j2 for identification. s_it is the unobserved state, which is 1 if the LGD of loan i at time t is a bad loss and 0 otherwise. We let the probability of being in state 1 vary across loan characteristics and time as follows:

\[
P(s_{it} = 1) = p_{jt} = \Lambda\!\left(\sum_{j=1}^{n} I[i \in G_j]\,(\lambda_{j0} + \lambda_{j1}\alpha_t)\right),
\tag{2}
\]

where Λ(x) = exp(x)/(1 + exp(x)) is the logistic function and α_t the latent signal at time t.

The distribution can change in three ways, influencing the average LGD: (i) a change in the mixture probability p_jt, (ii) a shift of the entire distribution through a common change in μ_1 and μ_2, or (iii) a shift in the difference between μ_1 and μ_2. We know most LGD are (close to) 0 or 1, so we do not expect the means to vary much. Estimating the model such that a latent signal underlies possible changes in the means confirms this, as the variation of the signal is smaller in such cases. We find that most variation is explained by p_jt. Therefore, we model a larger (smaller) average LGD in a time period solely as being caused by an increase (decrease) in the proportion of bad to good losses.

We allow the relation with the latent variable to vary per loan characteristic, because
the probability of a bad loss might differ or be less time-varying depending on the characteristic. For example, senior loans might be less volatile than subordinated loans. Senior lenders get paid back first, so in case of a bad loss the probability of recovering money is higher than for subordinated lenders. Further, the pattern over time might differ per industry. Some industries are sensitive to cycles, such as financials, while consumer staples might be less sensitive, or the timing of ups and downs may differ. Shleifer and Vishny (1992) argue that the downturn LGD only exists at an industry level.

Both normal distributions have the same variance, which is necessary for identification. Otherwise, the large proportion of good losses very close to 0 yields a very small estimate of σ₁² and a relatively large one for σ₂². This implies that an LGD smaller than 0 comes from the second distribution, and we can no longer identify the two distributions as those for good and bad losses.

3.2 Default rate

The PECDC database for default rates contains the number of defaults and the number of loans, which means that we have default rate and size information for the portfolio. We assume that the probability of a default of loan i at time t follows a Bernoulli distribution. Further, to include the size information, this assumption implies modeling the defaults in period t as a realization of a binomial distribution, similar to Bruche and González-Aguado (2010). The distribution depends on the latent signal α_t through the probability of default q_t:

\[
y^d_t \sim \text{Binomial}(L_t, q_t), \tag{3}
\]
\[
q_t = \Lambda(\gamma_0 + \gamma_1 \alpha_t), \tag{4}
\]

with y^d_t the number of defaults and L_t the number of loans at time t.

The defaults and loans are observed yearly, not quarterly. We set the third quarter equal to the yearly observation, since this is approximately the middle of the year, and define the other quarters as missing. Using the same method, Bernanke et al. (1997) construct
a monthly time series from a quarterly observed variable.

3.3 Missing values and identification

We do not observe multiple defaults per loan. We treat the periods in which a loan is not in default as missing. One of the advantages of a state space model is its ability to easily handle missing values, which we do as follows. The densities are cross-sectionally independent given α, such that

\[
\log p(y_t \mid \alpha) = \sum_{i=1}^{N} \delta_{it} \log p_i(y_{it} \mid \alpha), \tag{5}
\]

where δ_it is an indicator which is 1 if y_it is observed and 0 if it is missing. Therefore, only the observed values influence the log-likelihood, given in appendix B.2.

The signal of the non-Gaussian state space model follows an AR(1) process,

\[
\alpha_{t+1} = \rho \alpha_t + \eta_t, \tag{6}
\]

with η_t ~ N(0, σ_η²) and initial state distribution α₁ ~ N(a₁, P₁), where a₁ and P₁ are parameters to be estimated. We have one latent Gaussian signal α underlying the default rate and the probability of a bad loss. The relation of the observations with the signal is determined by the parameters λ_jk and γ_k and is allowed to vary across j, i.e. we do not impose λ_1k = ... = λ_nk = γ_k for k = 0, 1. We set λ_10 = 0 and λ_11 = 1, or γ_0 = 0 and γ_1 = 1, for identification purposes.²

4 Methodology

Analytically solving the model for the parameters θ and the signal α is not possible. Direct numerical optimization is also infeasible, due to the dimensionality of the optimization problem. It would be possible to optimize the parameters θ if α were known, and vice versa.

² Otherwise, α is not identified, since we can multiply it by an arbitrary value and divide the λ_jk and γ_k by the same value, yielding the same estimates for p_jt and q_t.
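This alternating structure, optimizing θ given α and vice versa, is resolved by an EM algorithm in the next section. As intuition, here is a minimal EM iteration for a two-component normal mixture with a common variance, mirroring the good/bad loss mixture of section 3.1; the data and starting values are toys, and the paper's full algorithm additionally involves the latent factor and importance sampling.

```python
import math
import random

random.seed(1)

def normal_pdf(y, mu, var):
    return math.exp(-0.5 * (y - mu) ** 2 / var) / math.sqrt(2.0 * math.pi * var)

def em_step(y, p, mu1, mu2, var):
    """One EM iteration for a two-component normal mixture, common variance."""
    # E-step: posterior probability that each observation is a 'bad loss'
    resp = []
    for yi in y:
        a = (1.0 - p) * normal_pdf(yi, mu1, var)
        b = p * normal_pdf(yi, mu2, var)
        resp.append(b / (a + b))
    # M-step: update the mixture probability, means and common variance
    n = len(y)
    p_new = sum(resp) / n
    mu1_new = sum((1 - r) * yi for r, yi in zip(resp, y)) / sum(1 - r for r in resp)
    mu2_new = sum(r * yi for r, yi in zip(resp, y)) / sum(resp)
    var_new = sum((1 - r) * (yi - mu1_new) ** 2 + r * (yi - mu2_new) ** 2
                  for r, yi in zip(resp, y)) / n
    return p_new, mu1_new, mu2_new, var_new

# Toy bimodal LGD-like sample (illustrative, not PECDC data):
y = [random.gauss(0.05, 0.1) for _ in range(300)] + \
    [random.gauss(0.85, 0.1) for _ in range(100)]
p, mu1, mu2, var = 0.5, 0.0, 1.0, 0.05
for _ in range(50):
    p, mu1, mu2, var = em_step(y, p, mu1, mu2, var)
```

Each iteration provably does not decrease the likelihood; on this toy sample the estimates settle near the true values (means around 0.05 and 0.85, mixture probability around 0.25).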
Therefore, we employ the Expectation-Maximization (EM) algorithm, introduced by Dempster et al. (1977) and developed for state space models by Shumway and Stoffer (1982) and Watson and Engle (1983). The algorithm is a well-known iterative procedure consisting of two repeated steps, which has been proven to increase the log-likelihood at every iteration. If the log-likelihood is given by l(θ | Y, α, S), then the m-th iteration of the EM algorithm is:

1. E-step: Given the estimate of the m-th iteration, θ^(m), take the expectation with respect to the unobserved α and the states S, given Y:

\[
\mathrm{E}_{\alpha, S \mid Y}\left[ l\left(\theta^{(m)} \mid Y, \alpha, S\right) \right]. \tag{7}
\]

This implies that we need estimates for P(s_it = 1) and for the latent signal α underlying the probability, given the observed LGD and defaults. For the smoothed estimates of the probability of a bad loss, π_it = P(s_it = 1 | Y, α), we use the E-step of the EM algorithm for a finite mixture of normals, see for example Hamilton (1994). Since we have a nonlinear non-Gaussian state space model, methods for linear Gaussian state space models, like the Kalman filter, are no longer valid. Following Jungbacker and Koopman (2007), we therefore apply importance sampling to get an estimate of α. Appendix B.1 provides an outline of the method. We set the number of replications to R = 250 and employ 4 antithetic variables in the importance sampling algorithm. The expected log-likelihood (7) is given in appendix B.

2. M-step: Obtain a new estimate θ^(m+1) by maximizing the expected log-likelihood with respect to θ:

\[
\max_{\theta} \; \mathrm{E}_{\alpha, S \mid Y}\left[ l\left(\theta \mid Y, \alpha, S\right) \right]. \tag{8}
\]

Maximizing (8) involves the method of maximum likelihood. To make the analysis
feasible, we use analytical solutions where possible, since the EM algorithm itself is already a numerical optimization. We only resort to numerical optimization if an analytical solution is impossible. That is, we numerically optimize γ_k and λ_jk, for j = 1, ..., n and k = 0, 1, using a Newton-Raphson procedure, and use analytical solutions for the other parameters. The analytical solutions are given in appendix B.5.1. The gradient and Hessian used in the Newton-Raphson procedure are given in appendix B.5.2.

These steps are repeated until the log-likelihood has converged. We initialize the EM algorithm with neutral estimates. We set the means at 0 and 1 for good and bad losses, and the factor such that p_jt = q_t = 0.5, with intercepts λ_j0 = γ_0 = 0 and coefficients λ_j1 = γ_1 = 1 all equal. The other parameters are determined by sample estimates.

5 Results

5.1 Estimation results

First, we present results for the simple model with time variation but without cross-sectional variation by loan characteristics. The parameter estimates are presented in table 1. The model is able to distinguish two distributions in the data; the two estimated means confirm our interpretation as the distributions of good and bad losses. We can interpret p_jt as the ex-ante probability of a bad loss. The small estimate of σ_ε further indicates that the mixture of normals is suitable for describing the bank loan LGD distribution. The parameter σ_ε is an estimate of the width of the normal distributions: about 95% of the LGD for good losses are between −0.15 and 0.25, and between 0.65 and 1.05 for bad losses. The model yields an R² of approximately 0.85, indicating a good fit for the bank loan LGD.

The factor that underlies the probability of a bad loss is presented in figure 5. The interpretation of the factor and its coefficients is straightforward due to the monotonicity
of the logistic transformation. The estimated factor is on average below zero. A positive coefficient means that an increase of the factor also increases the ex-ante probability of a bad loss or default. The estimated factor shares features with the average LGD. The first few years are characterised by a downward trend, and from 2007 the level increases until the end of the sample. It differs, however, for a couple of reasons. First, the factor is a combination of the LGD and the default rates and is estimated using both sources of credit loss information. Second, the factor is a smoothed estimate, which means that it is conditional on all information up to time T. It is not simply the average of the LGD at time t, but contains information from the preceding and following observations. Third, the average LGD at time t is an unobserved variable, constructed ex post. Using the average LGD would have induced a generated regressor problem, which leads to incorrect inference (Pagan, 1984).

We cannot directly compare the sensitivity towards the factor between the LGD and the defaults via the coefficient λ_j1, because of the nonlinearity of the logistic function. Instead, we compare the average marginal effect of the signal: the average over t of the first derivative of the probability with respect to the signal α_t. The average marginal effects are presented in table 2. The coefficients λ₁ and γ₁ are both positive, which means that the probability of a bad loss p_t and the probability of a default q_t respond similarly. The average marginal effect suggests that the factor has a stronger effect on the probability of a bad loss, so q_t fluctuates less than p_t. They follow the same pattern over time, but at a different level, see figure 5. This is in line with research on losses on bonds, where DR and LGD are time-varying through a common cyclical component.
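The average marginal effect used in this comparison is the time average of the logistic derivative, Λ′(x) = Λ(x)(1 − Λ(x)), scaled by the loading. A sketch with an illustrative factor path and loadings (assumptions, not the estimates from table 2):

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def average_marginal_effect(alpha_path, b0, b1):
    """Average over t of d Lambda(b0 + b1*a_t) / d a_t,
    using Lambda'(x) = Lambda(x) * (1 - Lambda(x))."""
    effects = [b1 * logistic(b0 + b1 * a) * (1.0 - logistic(b0 + b1 * a))
               for a in alpha_path]
    return sum(effects) / len(effects)

# Illustrative factor path and loadings (assumptions, not estimates):
path = [-1.0, -0.5, 0.0, 0.5, 1.0]
ame_bad_loss = average_marginal_effect(path, -1.0, 1.0)  # probability of bad loss
ame_default = average_marginal_effect(path, -3.5, 0.5)   # probability of default
```

Because the default probability sits far in the flat tail of the logistic curve while the bad-loss probability sits closer to its steep middle, the same factor movement shifts p_t more than q_t, which is the comparison the average marginal effect makes possible despite the nonlinearity.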
The coefficients for the ex-ante probabilities are significantly different from 0, indicating that the factor has a significant effect on the probabilities. This is strengthened by a large average marginal effect for p_t: for an increase of the factor by one standard deviation, the probability of a bad loss increases by 2.5%. The probability of a bad loss is an important component of the average LGD at time t. The time variation in the factor suggests cyclicality in the LGD through the probability of a
bad loss, in the same direction as the DR.

Our findings have implications for monitoring. Since the proportion of bad to good losses changes over time, it is important to keep track of all loans. If the LGD of each loan were to increase, additional monitoring would not be effective. If only the LGD of the bad losses were to increase, it would be sufficient to monitor only the bad losses, although it may require additional monitoring to identify them.

[Figure 5 about here.]

[Table 1 about here.]

[Table 2 about here.]

5.2 Relation with macroeconomic variables

Table 1 shows that ρ is estimated close to 1, implying a persistent factor. The shape of the factor and the LGD and default information used to estimate it suggest that the factor can be interpreted as a credit cycle and credit risk indicator, where a decrease indicates better credit conditions with a lower probability of a bad loss and of defaults. We expect this credit cycle to be related to important signals of the state of the economy. We check the correlation with various macroeconomic variables, whose definitions are given in table 3. We investigate different lags and leads, because the lead-lag relation between the economy and the credit conditions is unclear. On one hand, if the economy deteriorates, it may take a few months before companies are affected and go into default. On the other hand, the defaults of many companies could push the economy into distress. The workout period further distorts the relation: the LGD are grouped by default date, but are made up of the cash flows in the recovery period, which depend on the state of the economy during the recovery period. The workout period can be shorter than a year or take up to 5 years.

Figure 6 presents the correlations between the factor and the macroeconomic variables. A lag of 5 means that we have the correlation between the factor at time t and the
macroeconomic variable at time t − 5. Due to the short time period, confidence bounds are wide and significance tests have low power. The cyclicality of the factor can be seen in its relation with most macroeconomic variables. The macroeconomic variables are known to be cyclical and we find a strong correlation with them. The cyclicality is observed for GDP growth (GDP), the Bloomberg Financial Conditions Index (BFCI), the NBER recession dummy (NBER), the unemployment rate (UR) and the short-term interest rate (SIR).

The timing of the correlations indicates that the credit conditions are first affected by the economy, considering the strong negative relation with lagged GDP, BFCI and NBER. This may be because it takes some time for defaults to materialize: companies first utilize their reserves before they go into default, and it takes a quarter of missed payments before a loan is considered to be in default. On the other hand, the credit conditions seem to have an effect on future economic conditions, considering GDP, SIR and the unemployment rate (UR). This could be due to worsened credit conditions, such that more companies go into default and have bad losses. It leads to more people losing their jobs as fewer companies recover, increasing the UR and decreasing GDP. The relation with SIR can be explained by the fact that it affects the recovery payments, which are made during the recovery period after the default date, where the LGD are grouped. The correlations indicate that the factor is related to the economy and suggest a cyclicality of the factor. From the lead-lag patterns, it is more plausible that the credit conditions follow recessions rather than cause them. After that, the economy is affected by the worsening of credit conditions, affecting mainly the UR.

[Table 3 about here.]

[Figure 6 about here.]

5.3 Loan characteristics

The factor might vary across loan characteristics, such as seniority and industry.
We consider two possibilities: (i) one factor is underlying all groups, but the parameters
are allowed to vary across characteristics, and (ii) every characteristic has its own underlying factor; for example, we estimate the factor only for secured loans and separately for unsecured loans. We do this across security, seniority, asset class, industry and country, but only select categories with a reasonable number of observations (> 800).

The parameter estimates in table 1 show that the models are always able to identify two distributions. The means for good losses are estimated between 0.03 and 0.06, and for bad losses above 0.82. The level of the individual factors underlying the ex-ante probabilities is in line with previous research. For example, secured loans have a lower factor than unsecured loans, which means that the probability of a bad loss or default is higher for unsecured loans. The mixture is suitable for bank loan LGD, regardless of the characteristics of the loans. The various factors share some main features across subsets. Overall, we observe a decrease in the first period, followed by an increase from 2007 onwards.

We study the difference in cyclicality across industries and countries. The industry factors in figure 7 vary mostly in the volatility of the factor. We can divide the 5 industries into 2 groups based on the correlations between the factors in figure 7: (i) Consumer Discretionary (CD), Financials (FIN) and Industrials (IND) and (ii) Consumer Staples (CS) and Information Technology (IT). The correlations between the factors within each group are larger than 0.40 and mostly above 0.65. The first group is identified as more volatile and the second group as more stable. The country factors follow a similar pattern, but their volatility varies; we only observe a different pattern for loans in the UK.

Allowing the parameters to vary across securities provides a good fit, with a significant increase of the log-likelihood.
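The two-component mixture behind these estimates can be illustrated with a small sketch. This is a minimal EM fit on simulated, hypothetical LGD data; the starting values and the simulated sample are assumptions for illustration, not the paper's data or estimates:

```python
import numpy as np

def fit_two_component_mixture(lgd, n_iter=200):
    """EM for a two-component normal mixture of 'good' (low) and 'bad' (high) losses."""
    # Initialize near the stylized facts: good losses near 0, bad losses near 1.
    mu = np.array([0.05, 0.85])
    sigma = np.array([0.10, 0.10])
    pi = 0.5  # ex-ante probability of a bad loss
    for _ in range(n_iter):
        # E-step: posterior probability that each observation is a bad loss.
        dens_good = np.exp(-0.5 * ((lgd - mu[0]) / sigma[0]) ** 2) / sigma[0]
        dens_bad = np.exp(-0.5 * ((lgd - mu[1]) / sigma[1]) ** 2) / sigma[1]
        resp = pi * dens_bad / (pi * dens_bad + (1 - pi) * dens_good)
        # M-step: update the mixture weight, the means and the standard deviations.
        pi = resp.mean()
        mu = np.array([np.average(lgd, weights=1 - resp),
                       np.average(lgd, weights=resp)])
        sigma = np.array([np.sqrt(np.average((lgd - mu[0]) ** 2, weights=1 - resp)),
                          np.sqrt(np.average((lgd - mu[1]) ** 2, weights=resp))])
    return pi, mu, sigma

# Simulated bimodal LGD sample: 70% good losses near 0.05, 30% bad losses near 0.90.
rng = np.random.default_rng(0)
bad = rng.random(2000) < 0.3
lgd = np.where(bad, rng.normal(0.90, 0.10, 2000), rng.normal(0.05, 0.05, 2000))
pi_hat, mu_hat, sigma_hat = fit_two_component_mixture(lgd)
```

On a bimodal sample like this, the fitted weight recovers the fraction of bad losses; in the paper's setting it is this fraction, rather than the within-group means, that varies over the cycle.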
The marginal effects in table 2 show that the estimated factor has a larger effect on secured loans than on unsecured loans, although both respond in the same direction. A change of one standard deviation in the factor increases the probability of a bad loss by 2.7% for secured loans and by 1.8% for unsecured loans. This is also clear from the factor estimated on each group of loans in figure 7. The factors show similar patterns, but the factor for secured loans is more volatile in the current sample period, even though one would have expected secured loans to be less sensitive to downturns.
A plausible explanation is that the value of the securities backing the loan is cyclical. For example, demand for the collateral may vary with the state of the economy, so its value changes over time. The variation of losses over time for the large corporate (LC) and small and medium enterprises (SME) asset classes is almost identical, considering the average marginal effect in table 2. The factors in figure 7 visualize this, with 2007 as an obvious exception.

The differences between the factors across loan characteristics are subject to data issues. Applying the model to the subsets yields a factor that is sensitive to outliers, because of the small number of observations per time period. However, we can still infer a grouping of industries, that secured loans appear more cyclical than unsecured loans, and that there are no differences between asset classes. This yields more insight into the source of variation in average LGD for bank loans, which could result in more targeted monitoring. Further, the loss model could be made more detailed by allowing for differences across loan characteristics, which might be interesting for banks operating in a certain sector.

[Figure 7 about here.]

6 Conclusion

We investigate cross-sectional and time-varying properties of losses on bank loans. The PECDC database provides a unique dataset in terms of size and cross-sectional span. We use a mixture of normals for good and bad losses. The mixture provides a good fit and is suitable for modeling bank loan loss given default. We show that time variation of the loss given default is mostly explained by the probability of a bad loss. The bimodal empirical distribution is in line with stylized facts on loss given default. The loss given default of bank loans differs from that of bonds, most importantly because its value can lie outside the [0, 1] interval.
We estimate a non-Gaussian state space model using simulation-based methods, yielding a factor that drives the probability of a bad loss and of a default. The factor
exhibits cyclical behavior and can be related to macroeconomic variables. Specifically, we find that low GDP growth and deteriorating financial conditions coincide with bad credit conditions. Further, bad credit conditions induce an increase in the future unemployment rate, because fewer companies are able to recover and more people lose their jobs. Across loan characteristics, the factor shows similarities in the overall pattern; the variation is mostly in terms of volatility. We identify two groups of industries on this basis and find that secured loans are more sensitive to the credit cycle than unsecured loans. Our results have implications for the monitoring of loans by banks: they should focus on the proportion of bad to good losses, as this is the main source of the time variation in the loss given default.
References

Allen, L. and Saunders, A. (2003). A survey of cyclical effects in credit risk measurement models. BIS Working Papers, (126).

Altman, E., Brady, B., Resti, A., and Sironi, A. (2005). The Link between Default and Recovery Rates: Theory, Empirical Evidence, and Implications. The Journal of Business, 78(6).

Basel Committee on Banking Supervision (2005). Guidance on Paragraph 468 of the Framework Document. Bank for International Settlements.

Bernanke, B., Gertler, M., and Watson, M. (1997). Systematic Monetary Policy and the Effects of Oil Price Shocks. Brookings Papers on Economic Activity, 1.

Bruche, M. and González-Aguado, C. (2010). Recovery rates, default probabilities, and the credit cycle. Journal of Banking & Finance, 34(4).

Calabrese, R. and Zenga, M. (2010). Bank loan recovery rates: Measuring and nonparametric density estimation. Journal of Banking & Finance, 34(5).

Creal, D., Schwaab, B., Koopman, S., and Lucas, A. (2014). Observation Driven Mixed-Measurement Dynamic Factor Models with an Application to Credit Risk. Review of Economics and Statistics, 96.

De Jong, P. and Shephard, N. (1995). The simulation smoother for time series models. Biometrika, 82(2).

Dempster, A., Laird, N., and Rubin, D. (1977). Maximum Likelihood from Incomplete Data via the EM Algorithm. Journal of the Royal Statistical Society, 39(1):1-38.

Durbin, J. and Koopman, S. J. (2012). Time series analysis by state space methods. Oxford University Press.

Grunert, J. and Weber, M. (2009). Recovery rates of commercial lending: Empirical evidence for German companies. Journal of Banking & Finance, 33.
Hamilton, J. D. (1994). Time series analysis. Princeton University Press.

Hartmann-Wendels, T., Miller, P., and Töws, E. (2014). Loss given default for leasing: Parametric and nonparametric estimations. Journal of Banking & Finance, 40.

Höcht, S. and Zagst, R. (2007). Loan Recovery Determinants - A Pan-European Study. Working Paper.

Jungbacker, B. and Koopman, S. J. (2007). Monte Carlo estimation for nonlinear non-Gaussian state space models. Biometrika, 94(4).

Needham, C., Verde, M., Greening, T., Hatton, J., and Moss, J. (2012). Fitch Ratings Global Corporate Finance 2011 Transition and Default Study. Fitch Ratings Credit Market Research.

Pagan, A. (1984). Econometric Issues in the Analysis of Regressions with Generated Regressors. International Economic Review.

Schuermann, T. (2004). What Do We Know About Loss Given Default? In D. Shimko (ed.), Credit Risk Models and Management, 2nd Edition. London: Risk Books.

Shleifer, A. and Vishny, R. (1992). Liquidation Values and Debt Capacity: A Market Equilibrium Approach. The Journal of Finance, 47(4).

Shumway, R. and Stoffer, D. (1982). An Approach to Time Series Smoothing and Forecasting using the EM Algorithm. Journal of Time Series Analysis, 3(4).

Van Beest, M. (2008). Reference Data Set (RDS). NIBC Bank Memo.

Vazza, D., Kraemer, N., Richhariya, N., and Sakhare, A. (2012). Default, Transition, and Recovery: 2011 Annual Global Corporate Default Study And Rating Transitions. Standard & Poor's Global Credit Portal RatingsDirect.

Watson, M. and Engle, R. (1983). Alternative Algorithms for the Estimation of Dynamic Factor, Mimic and Varying Coefficient Regression Models. Journal of Econometrics, 23.
Appendix A Data

A.1 Data filter

Following Höcht and Zagst (2007), who perform research on the PECDC data, and Van Beest (2008), who constructs NIBC's reference data set³ from the NIBC subset, we apply the following filters to the LGD database.

- EAD ≥ €100,000. The paper focuses on loans where there has been an actual (possible) loss, so the EAD should at least be larger than 0. Furthermore, the database contains some extreme LGD values for small EAD. To account for this noise, loans with an EAD smaller than €100,000 are excluded.

- −10% < ((CF + CO) − (EAD − EAR)) / (EAD + PA) < 10%, where CF denotes cash flows, CO charge-offs and PA principal advances. The cash flows that make up the LGD should be plausible, since they are its major building blocks. A way of checking this is by looking at under- and overpayments. The difference between the EAD and the exposure at resolution (EAR), where resolution is the moment at which the default is resolved, should be close to the sum of the cash flows and charge-offs. The cash flow is the money coming in and the charge-off is the acknowledgement of a loss on the balance sheet, because the exposure is not expected to be repaid. Both reduce the exposure and should explain the difference between EAD and EAR. There might be an under- or overpayment, resulting in a difference. To exclude implausible cash flows, loans are excluded when this difference amounts to 10% or more of the sum of the EAD and the principal advances (PA). The 10% threshold is a choice of the PECDC.

- −0.5 ≤ LGD ≤ 1.5. Although the LGD is theoretically expected to lie between 0 and 1, values outside this range are possible, e.g. due to principal advances or a profit on the sale of assets. Abnormally high or low values are excluded: they are implausible and influence the LGD statistics too much.

³ A reference data set is a subset of the database that is representative for the user's portfolio, such that they can use it for statistical purposes.
- No government guarantees. The PECDC contains loans with special guarantees from the government. Most of these loans are subordinated, but due to the guarantee, the average subordinated LGD is lower than expected. Because these loans are very different from others with the same seniority, and to prevent underestimation of the subordinated LGD, they are excluded from the dataset.

Some PECDC members also filter on high principal advances ratios, defined as the sum of the principal advances divided by the EAD. Even though high ratios are plausible, these members consider them to influence the data too much and therefore exclude loans with ratios larger than 100%. NIBC does include these loans, because they are supposed to contain valuable information and the influence of outliers is mitigated by capping the LGD at 1.5. In our data the principal advances ratio never exceeds 100%, so applying the filter would not affect the data and it is therefore not considered.

Appendix B Derivations

B.1 Importance sampling

We outline the simulation-based method of importance sampling, which we use to evaluate the non-Gaussian state space model. For more information on importance sampling for state space models, see for example Durbin and Koopman (2012). Consider the following nonlinear non-Gaussian state space model with a linear and Gaussian signal,

y_t ~ p(y_t | α_t), (9)
α_{t+1} = ρ α_t + η_t, (10)

with η_t ~ NID(0, σ_ε²) for t = 1, ..., T, where y_t is an N × 1 observation vector and α_t the signal at time t. For notational convenience, we express the state space model in matrix form. We stack the observations into an N × T observation matrix Y = (y_1, ..., y_T) and the
T × 1 signal vector α = (α_1, ..., α_T)', such that we have

Y ~ p(Y | α), (11)
α ~ N(μ, Ψ). (12)

The method of importance sampling is a way of evaluating integrals by means of simulation. It can be difficult or infeasible to sample directly from p(α | Y), which is the case for non-Gaussian state space models. Therefore, an importance density g(α | Y), from which it is easier to sample, is used to approximate p(α | Y). In particular, consider the evaluation of the expected value of a function x(α),

x̄ = E[x(α) | Y] = ∫ x(α) p(α | Y) dα = ∫ x(α) [p(α | Y) / g(α | Y)] g(α | Y) dα = E_g[x(α) p(α | Y) / g(α | Y)]. (13)

For a non-Gaussian state space model with a Gaussian signal, this can be rewritten as

x̄ = E_g[x(α) w(α, Y)] / E_g[w(α, Y)], (14)
w(α, Y) = p(Y | α) / g(Y | α), (15)

which contains only densities that are easy to sample from. Then x̄ is estimated by replacing the expectations with their sample estimates. The function x(α) can be any function of α; for example, the mean is estimated by setting x(α) = α. For the estimation of the likelihood L(θ | Y) = p(Y | θ) we have

L(θ | Y) = ∫ [p(α, Y) / g(α | Y)] g(α | Y) dα = g(Y) ∫ [p(α, Y) / g(α, Y)] g(α | Y) dα = L_g(θ | Y) E_g[w(α, Y)], (16)

where L_g(θ | Y) = g(Y) is the likelihood of the approximating Gaussian model. This is estimated by the sample analog L̂_g(θ | Y) w̄, with w̄ = (1/R) Σ_{r=1}^R w(α^(r), Y), where α^(r),
r = 1, ..., R, are independent draws from g(α | Y) obtained with the simulation smoother. Its log version is log L̂(θ | Y) = log L̂_g(θ | Y) + log w̄.

B.1.1 Mode estimation

The importance density g(α | Y) must be chosen such that it is easy to sample from and approximates the target density well. If the importance density does not share the support of the target density, the estimate will be inaccurate. A suitable choice of importance density is a Gaussian density that has the same mean and variance as the target density. For a Gaussian state space model it is possible to sample from p(α | Y) using the simulation smoother developed by De Jong and Shephard (1995). Therefore, we would like to obtain a Gaussian model that approximates the non-Gaussian model defined by (9) and (10).

The approximating Gaussian model can be obtained by mode estimation, a Newton-Raphson procedure for finding the mode of the signal α in a non-Gaussian state space model. The procedure is outlined below, including how it results in an approximating Gaussian state space model. Given an initial guess g for the mode of α, for example based on prior knowledge of the data, the Newton-Raphson update for the mode is

g⁺ = g − (p̈(α | Y)|_{α=g})⁻¹ ṗ(α | Y)|_{α=g}, (17)

with ṗ(·) = ∂ log p(·)/∂α, a T × 1 vector, and p̈(·) = ∂² log p(·)/∂α ∂α', a T × T matrix. We cannot apply this procedure directly, since p(α | Y) is unknown, but Bayes' rule enables us to rewrite the smoothed log density as

log p(α | Y) = log p(Y | α) + log p(α) − log p(Y), (18)

where log p(Y | α) = Σ_{t=1}^T log p(y_t | α_t) = Σ_{t=1}^T Σ_{i=1}^N log p_i(y_it | α_t), p(α) is given in (12), and
the last term does not depend on α and can thus be left unspecified. The density p_i(y_it | α_t) may vary over i, so observations are allowed to have different distributions. We get

ṗ(α | Y) = ṗ(Y | α) − Ψ⁻¹(α − μ), (19)
p̈(α | Y) = p̈(Y | α) − Ψ⁻¹, (20)

where ṗ(Y | α) = (ṗ_1(y_1 | α_1), ..., ṗ_T(y_T | α_T))' and p̈(Y | α) = diag(p̈_1(y_1 | α_1), ..., p̈_T(y_T | α_T)), with ṗ_t(·) = ∂ log p(·)/∂α_t and p̈_t(·) = ∂² log p(·)/∂α_t². If we plug the expressions (19) and (20) into (17), we get

g⁺ = g − (p̈(Y | α)|_{α=g} − Ψ⁻¹)⁻¹ (ṗ(Y | α)|_{α=g} − Ψ⁻¹(g − μ)) (21)
   = (Ψ⁻¹ + A⁻¹)⁻¹ (A⁻¹ z + Ψ⁻¹ μ), (22)
z = g + A ṗ(Y | α)|_{α=g}, (23)
A = −(p̈(Y | α)|_{α=g})⁻¹, (24)

where z = (z_1, ..., z_T)' is a T × 1 vector and A = diag(A_1, ..., A_T) a T × T matrix. It can be shown that (22) is the output of the Kalman filter and smoother for a linear Gaussian model with observation vector z and variance matrix A. From mode estimation we have thus obtained the following approximating Gaussian model,

z_t = α_t + u_t, (25)
α_{t+1} = ρ α_t + η_t, (26)

where u_t ~ NID(0, A_t) and η_t ~ NID(0, σ_ε²) for t = 1, ..., T, with z_t and A_t defined in (23) and (24). The Newton-Raphson procedure described above is equivalent to repeatedly applying the Kalman filter and smoother to this model. The density p(α | z) from the model is Gaussian and approximates the non-Gaussian target density well, since it has the same mean and variance. Therefore, the density p(α | z) from (25) and (26) is a suitable choice of importance density.
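As a concrete illustration of the ratio estimator (14)-(15) and the Newton-Raphson mode step (17), the sketch below works through a one-dimensional toy example: a single Poisson observation with a Gaussian signal, rather than the paper's LGD and default model. The model, the numbers and all names are illustrative assumptions:

```python
import numpy as np

# Toy non-Gaussian model with a single signal alpha:
#   y | alpha ~ Poisson(exp(alpha)),  alpha ~ N(mu, psi).
# We estimate E[alpha | y] with the ratio estimator, drawing from a
# Gaussian importance density centred at the posterior mode.
mu, psi = 0.0, 1.0
y = 4

# Newton-Raphson for the posterior mode, the scalar analogue of (17):
# d/da log p(a | y) = y - exp(a) - (a - mu)/psi
g = 0.0
for _ in range(50):
    grad = y - np.exp(g) - (g - mu) / psi
    hess = -np.exp(g) - 1.0 / psi
    g = g - grad / hess

# Curvature-matched variance of the importance density, A = -(d2 log p)^(-1):
A = -1.0 / (-np.exp(g) - 1.0 / psi)

# Draw from g(alpha | y) = N(g, A) and compute importance weights
# w = p(y | alpha) p(alpha) / g(alpha | y), up to a common constant.
rng = np.random.default_rng(1)
R = 100_000
draws = g + np.sqrt(A) * rng.standard_normal(R)
log_w = (y * draws - np.exp(draws)          # log p(y | alpha), up to -log(y!)
         - 0.5 * (draws - mu) ** 2 / psi    # log p(alpha), up to a constant
         + 0.5 * (draws - g) ** 2 / A)      # minus log g(alpha | y), up to a constant
w = np.exp(log_w - log_w.max())             # rescale for numerical stability

# Ratio estimator, the analogue of (14): sample means replace the expectations.
posterior_mean = np.sum(draws * w) / np.sum(w)
```

The same two steps, mode estimation to build a Gaussian importance density followed by a weighted sample average, carry over to the full model, where the mode step is performed with the Kalman filter and smoother and the draws come from the simulation smoother.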
More informationFE670 Algorithmic Trading Strategies. Stevens Institute of Technology
FE670 Algorithmic Trading Strategies Lecture 4. Cross-Sectional Models and Trading Strategies Steve Yang Stevens Institute of Technology 09/26/2013 Outline 1 Cross-Sectional Methods for Evaluation of Factor
More informationEffect of Firm Age in Expected Loss Estimation for Small Sized Firms
Proceedings of the Asia Pacific Industrial Engineering & Management Systems Conference 2015 Effect of Firm Age in Expected Loss Estimation for Small Sized Firms Kenzo Ogi Risk Management Department Japan
More informationExperience with the Weighted Bootstrap in Testing for Unobserved Heterogeneity in Exponential and Weibull Duration Models
Experience with the Weighted Bootstrap in Testing for Unobserved Heterogeneity in Exponential and Weibull Duration Models Jin Seo Cho, Ta Ul Cheong, Halbert White Abstract We study the properties of the
More informationEstimation of Volatility of Cross Sectional Data: a Kalman filter approach
Estimation of Volatility of Cross Sectional Data: a Kalman filter approach Cristina Sommacampagna University of Verona Italy Gordon Sick University of Calgary Canada This version: 4 April, 2004 Abstract
More informationCharacterization of the Optimum
ECO 317 Economics of Uncertainty Fall Term 2009 Notes for lectures 5. Portfolio Allocation with One Riskless, One Risky Asset Characterization of the Optimum Consider a risk-averse, expected-utility-maximizing
More informationProperties of the estimated five-factor model
Informationin(andnotin)thetermstructure Appendix. Additional results Greg Duffee Johns Hopkins This draft: October 8, Properties of the estimated five-factor model No stationary term structure model is
More informationAn Improved Skewness Measure
An Improved Skewness Measure Richard A. Groeneveld Professor Emeritus, Department of Statistics Iowa State University ragroeneveld@valley.net Glen Meeden School of Statistics University of Minnesota Minneapolis,
More informationAn Introduction to Bayesian Inference and MCMC Methods for Capture-Recapture
An Introduction to Bayesian Inference and MCMC Methods for Capture-Recapture Trinity River Restoration Program Workshop on Outmigration: Population Estimation October 6 8, 2009 An Introduction to Bayesian
More informationThe method of Maximum Likelihood.
Maximum Likelihood The method of Maximum Likelihood. In developing the least squares estimator - no mention of probabilities. Minimize the distance between the predicted linear regression and the observed
More informationShort-selling constraints and stock-return volatility: empirical evidence from the German stock market
Short-selling constraints and stock-return volatility: empirical evidence from the German stock market Martin Bohl, Gerrit Reher, Bernd Wilfling Westfälische Wilhelms-Universität Münster Contents 1. Introduction
More informationExplaining the Last Consumption Boom-Bust Cycle in Ireland
Public Disclosure Authorized Public Disclosure Authorized Public Disclosure Authorized Public Disclosure Authorized Policy Research Working Paper 6525 Explaining the Last Consumption Boom-Bust Cycle in
More informationGovernment spending and firms dynamics
Government spending and firms dynamics Pedro Brinca Nova SBE Miguel Homem Ferreira Nova SBE December 2nd, 2016 Francesco Franco Nova SBE Abstract Using firm level data and government demand by firm we
More informationAn Implementation of Markov Regime Switching GARCH Models in Matlab
An Implementation of Markov Regime Switching GARCH Models in Matlab Thomas Chuffart Aix-Marseille University (Aix-Marseille School of Economics), CNRS & EHESS Abstract MSGtool is a MATLAB toolbox which
More informationEstimation of a Ramsay-Curve IRT Model using the Metropolis-Hastings Robbins-Monro Algorithm
1 / 34 Estimation of a Ramsay-Curve IRT Model using the Metropolis-Hastings Robbins-Monro Algorithm Scott Monroe & Li Cai IMPS 2012, Lincoln, Nebraska Outline 2 / 34 1 Introduction and Motivation 2 Review
More informationThe relationship between output and unemployment in France and United Kingdom
The relationship between output and unemployment in France and United Kingdom Gaétan Stephan 1 University of Rennes 1, CREM April 2012 (Preliminary draft) Abstract We model the relation between output
More informationReturn to Capital in a Real Business Cycle Model
Return to Capital in a Real Business Cycle Model Paul Gomme, B. Ravikumar, and Peter Rupert Can the neoclassical growth model generate fluctuations in the return to capital similar to those observed in
More informationChapter 9 Dynamic Models of Investment
George Alogoskoufis, Dynamic Macroeconomic Theory, 2015 Chapter 9 Dynamic Models of Investment In this chapter we present the main neoclassical model of investment, under convex adjustment costs. This
More informationIntroduction Dickey-Fuller Test Option Pricing Bootstrapping. Simulation Methods. Chapter 13 of Chris Brook s Book.
Simulation Methods Chapter 13 of Chris Brook s Book Christopher Ting http://www.mysmu.edu/faculty/christophert/ Christopher Ting : christopherting@smu.edu.sg : 6828 0364 : LKCSB 5036 April 26, 2017 Christopher
More informationMarket Timing Does Work: Evidence from the NYSE 1
Market Timing Does Work: Evidence from the NYSE 1 Devraj Basu Alexander Stremme Warwick Business School, University of Warwick November 2005 address for correspondence: Alexander Stremme Warwick Business
More informationIdentifying Long-Run Risks: A Bayesian Mixed-Frequency Approach
Identifying : A Bayesian Mixed-Frequency Approach Frank Schorfheide University of Pennsylvania CEPR and NBER Dongho Song University of Pennsylvania Amir Yaron University of Pennsylvania NBER February 12,
More informationHOUSEHOLDS INDEBTEDNESS: A MICROECONOMIC ANALYSIS BASED ON THE RESULTS OF THE HOUSEHOLDS FINANCIAL AND CONSUMPTION SURVEY*
HOUSEHOLDS INDEBTEDNESS: A MICROECONOMIC ANALYSIS BASED ON THE RESULTS OF THE HOUSEHOLDS FINANCIAL AND CONSUMPTION SURVEY* Sónia Costa** Luísa Farinha** 133 Abstract The analysis of the Portuguese households
More informationMEASURING PORTFOLIO RISKS USING CONDITIONAL COPULA-AR-GARCH MODEL
MEASURING PORTFOLIO RISKS USING CONDITIONAL COPULA-AR-GARCH MODEL Isariya Suttakulpiboon MSc in Risk Management and Insurance Georgia State University, 30303 Atlanta, Georgia Email: suttakul.i@gmail.com,
More informationCredit Risk Management: A Primer. By A. V. Vedpuriswar
Credit Risk Management: A Primer By A. V. Vedpuriswar February, 2019 Altman s Z Score Altman s Z score is a good example of a credit scoring tool based on data available in financial statements. It is
More informationThe Multinomial Logit Model Revisited: A Semiparametric Approach in Discrete Choice Analysis
The Multinomial Logit Model Revisited: A Semiparametric Approach in Discrete Choice Analysis Dr. Baibing Li, Loughborough University Wednesday, 02 February 2011-16:00 Location: Room 610, Skempton (Civil
More informationOnline Appendix: Asymmetric Effects of Exogenous Tax Changes
Online Appendix: Asymmetric Effects of Exogenous Tax Changes Syed M. Hussain Samreen Malik May 9,. Online Appendix.. Anticipated versus Unanticipated Tax changes Comparing our estimates with the estimates
More informationThe Impact of Macroeconomic Uncertainty on Commercial Bank Lending Behavior in Barbados. Ryan Bynoe. Draft. Abstract
The Impact of Macroeconomic Uncertainty on Commercial Bank Lending Behavior in Barbados Ryan Bynoe Draft Abstract This paper investigates the relationship between macroeconomic uncertainty and the allocation
More informationMoney Market Uncertainty and Retail Interest Rate Fluctuations: A Cross-Country Comparison
DEPARTMENT OF ECONOMICS JOHANNES KEPLER UNIVERSITY LINZ Money Market Uncertainty and Retail Interest Rate Fluctuations: A Cross-Country Comparison by Burkhard Raunig and Johann Scharler* Working Paper
More information[D7] PROBABILITY DISTRIBUTION OF OUTSTANDING LIABILITY FROM INDIVIDUAL PAYMENTS DATA Contributed by T S Wright
Faculty and Institute of Actuaries Claims Reserving Manual v.2 (09/1997) Section D7 [D7] PROBABILITY DISTRIBUTION OF OUTSTANDING LIABILITY FROM INDIVIDUAL PAYMENTS DATA Contributed by T S Wright 1. Introduction
More informationAgricultural and Applied Economics 637 Applied Econometrics II
Agricultural and Applied Economics 637 Applied Econometrics II Assignment I Using Search Algorithms to Determine Optimal Parameter Values in Nonlinear Regression Models (Due: February 3, 2015) (Note: Make
More informationA Simple Approach to Balancing Government Budgets Over the Business Cycle
A Simple Approach to Balancing Government Budgets Over the Business Cycle Erick M. Elder Department of Economics & Finance University of Arkansas at ittle Rock 280 South University Ave. ittle Rock, AR
More informationOmitted Variables Bias in Regime-Switching Models with Slope-Constrained Estimators: Evidence from Monte Carlo Simulations
Journal of Statistical and Econometric Methods, vol. 2, no.3, 2013, 49-55 ISSN: 2051-5057 (print version), 2051-5065(online) Scienpress Ltd, 2013 Omitted Variables Bias in Regime-Switching Models with
More informationTOPICS IN MACROECONOMICS: MODELLING INFORMATION, LEARNING AND EXPECTATIONS LECTURE NOTES. Lucas Island Model
TOPICS IN MACROECONOMICS: MODELLING INFORMATION, LEARNING AND EXPECTATIONS LECTURE NOTES KRISTOFFER P. NIMARK Lucas Island Model The Lucas Island model appeared in a series of papers in the early 970s
More informationCredit Shocks and the U.S. Business Cycle. Is This Time Different? Raju Huidrom University of Virginia. Midwest Macro Conference
Credit Shocks and the U.S. Business Cycle: Is This Time Different? Raju Huidrom University of Virginia May 31, 214 Midwest Macro Conference Raju Huidrom Credit Shocks and the U.S. Business Cycle Background
More information1 Introduction. Term Paper: The Hall and Taylor Model in Duali 1. Yumin Li 5/8/2012
Term Paper: The Hall and Taylor Model in Duali 1 Yumin Li 5/8/2012 1 Introduction In macroeconomics and policy making arena, it is extremely important to have the ability to manipulate a set of control
More informationCash holdings determinants in the Portuguese economy 1
17 Cash holdings determinants in the Portuguese economy 1 Luísa Farinha Pedro Prego 2 Abstract The analysis of liquidity management decisions by firms has recently been used as a tool to investigate the
More informationEXAMINING MACROECONOMIC MODELS
1 / 24 EXAMINING MACROECONOMIC MODELS WITH FINANCE CONSTRAINTS THROUGH THE LENS OF ASSET PRICING Lars Peter Hansen Benheim Lectures, Princeton University EXAMINING MACROECONOMIC MODELS WITH FINANCING CONSTRAINTS
More informationدرس هفتم یادگیري ماشین. (Machine Learning) دانشگاه فردوسی مشهد دانشکده مهندسی رضا منصفی
یادگیري ماشین توزیع هاي نمونه و تخمین نقطه اي پارامترها Sampling Distributions and Point Estimation of Parameter (Machine Learning) دانشگاه فردوسی مشهد دانشکده مهندسی رضا منصفی درس هفتم 1 Outline Introduction
More informationSample Size for Assessing Agreement between Two Methods of Measurement by Bland Altman Method
Meng-Jie Lu 1 / Wei-Hua Zhong 1 / Yu-Xiu Liu 1 / Hua-Zhang Miao 1 / Yong-Chang Li 1 / Mu-Huo Ji 2 Sample Size for Assessing Agreement between Two Methods of Measurement by Bland Altman Method Abstract:
More informationA Convenient Way of Generating Normal Random Variables Using Generalized Exponential Distribution
A Convenient Way of Generating Normal Random Variables Using Generalized Exponential Distribution Debasis Kundu 1, Rameshwar D. Gupta 2 & Anubhav Manglick 1 Abstract In this paper we propose a very convenient
More informationMM and ML for a sample of n = 30 from Gamma(3,2) ===============================================
and for a sample of n = 30 from Gamma(3,2) =============================================== Generate the sample with shape parameter α = 3 and scale parameter λ = 2 > x=rgamma(30,3,2) > x [1] 0.7390502
More informationAsymmetric Price Transmission: A Copula Approach
Asymmetric Price Transmission: A Copula Approach Feng Qiu University of Alberta Barry Goodwin North Carolina State University August, 212 Prepared for the AAEA meeting in Seattle Outline Asymmetric price
More information