Technische Universiteit Delft Faculteit Elektrotechniek, Wiskunde en Informatica Delft Institute of Applied Mathematics


Technische Universiteit Delft
Faculteit Elektrotechniek, Wiskunde en Informatica
Delft Institute of Applied Mathematics

Het nauwkeurig bepalen van de verlieskans van een portfolio van risicovolle leningen
(English title: Accurately determining the loss probability of a portfolio of risky loans)

Report for the Delft Institute of Applied Mathematics in partial fulfilment of the requirements for the degree of BACHELOR OF SCIENCE in APPLIED MATHEMATICS

by JELLE STRAATHOF

Delft, the Netherlands, November 2011

Copyright © 2011 by Jelle Straathof. All rights reserved.


BSc thesis APPLIED MATHEMATICS

Het nauwkeurig bepalen van de verlieskans van een portfolio van risicovolle leningen
(English title: Accurately determining the loss probability of a portfolio of risky loans)

JELLE STRAATHOF

Technische Universiteit Delft

Supervisor: Dr. ir. L.E. Meester
Other committee members: Dr. J.G. Spandaw, Prof. dr. ir. C.W. Oosterlee

November 2011, Delft


Contents

1 Introduction
  1.1 Methodology
  1.2 Linear Model of Default Indicators
  1.3 Normal Copula
  1.4 Factor Model
2 Simulation
  2.1 Monte Carlo Simulation
  2.2 Confidence Intervals
  2.3 Parameters of the Simulation
  2.4 Results of the Standard Monte Carlo Simulation
  2.5 Assessing the Variance Reduction and Batch Simulation
3 Importance Sampling
  3.1 Method 1: Exponential Tilting
      Results of Exponential Tilting
  3.2 Method 2: Factor Twisting
      Results of the Factor Twisting
  3.3 Method 3: Zero-Variance Distribution
      Fitting the Zero-Variance Distribution
      Results of the Zero-Variance Distribution
4 Expected Shortfall for Importance Sampling
5 Variations in the model
  5.1 Reducing the default probability
  5.2 Reducing the loan size
  5.3 Increasing the parameters of the portfolio
6 Conclusion
7 Appendix A
8 Appendix B
9 Appendix C1
  Appendix C2
References

1 Introduction

This project evaluates different methods of importance sampling that banks can use to determine the loss probability of a portfolio of risky loans. Banks use portfolios to group together similar financial products, so that they can be more easily analyzed and traded. These portfolios can hold many different sorts of products, including loans. A loan is created when one party, the lender, lends money to a second party, the obligor, who must repay it within a certain amount of time. An obligor may fail to repay the loan: this is called a default. In this situation, "the most basic problem... is determining the distribution of losses from default" [Glasserman, 2004]. From this distribution of losses, a number of characteristics of a portfolio can be calculated. One of these is the loss probability: the probability that the loss of the portfolio is greater than a fixed number $x$, i.e. $P(L > x)$, where $L$ is the loss of the portfolio and $x$ is called the default threshold. Another is the expected shortfall, $E[L \mid L > x]$: the expected loss, given that the loss is greater than the default threshold. To determine the distribution of losses, banks use mathematical models; one of these is a linear model of default indicators.

1.1 Methodology

To determine the loss probability of a portfolio of loans, a model of default indicators is used, as described in the book by Glasserman [Glasserman, 2004]. Monte Carlo simulation is used to estimate the loss probability and the expected shortfall of a portfolio. The simulation is performed for selected test cases so that results can be compared. A technique called importance sampling is then used to increase the accuracy of the results: samples of the necessary variables are drawn from a new distribution under which obligors default on their loans with higher probability.
The zero-variance distribution is the theoretical distribution that produces results with zero variance. This distribution is approximated and then used as the new distribution in the importance sampling to maximize the accuracy of the results. These variance reduction techniques are then applied to portfolios with different values for the default probability and the sizes of the loans, to find out how to maximize the variance reduction and so produce the most accurate results. The results presented are the factor by which each type of importance sampling reduces the variance of the standard Monte Carlo simulation; this makes it simple to determine which method gives the greatest variance reduction.

1.2 Linear Model of Default Indicators

A portfolio is a collection of loans held by a bank. Suppose the variable $m$ represents the number of loans in the portfolio, and the variable $c_i$ is the value of the loan of the $i$-th obligor, with $i = 1,\dots,m$. Each loan has an associated default probability $p_i$ (for the $i$-th obligor). The default indicator $Y_i$ takes the value 0 if obligor $i$ has repaid the loan and the value 1 if the obligor has not. The loss of the portfolio is then

$$L = \sum_{i=1}^{m} Y_i c_i.$$

To determine whether $Y_i$ is 1 or 0, each obligor has a corresponding state variable $X_i$ that reflects the state of the loan. The default indicators are represented as $Y_i = \mathbf{1}\{X_i > x_i\}$, with $x_i$ the default threshold of obligor $i$: if the state variable $X_i$ is larger than the default threshold, the loan is not repaid. At this point the obligors are independent of one another. To better reflect reality, where economic factors often affect all or most obligors, the model needs to be expanded so that the obligors are dependent. A factor model is used to incorporate a dependence structure among the obligors, and a normal copula is used to transfer this structure to the default indicators.

1.3 Normal Copula

Copulas are a mathematical mechanism for isolating the dependence structure between random variables. If the random vector $X = (X_1,\dots,X_d)$ has distribution function $F$ with marginals $F_1,\dots,F_d$, then the copula of $F$ is the distribution function $C$ of $(F_1(X_1),\dots,F_d(X_d))$. According to Sklar's theorem, if the marginals $F_i$ of the distribution function $F$ are continuous, there is a unique function $C : [0,1]^d \to [0,1]$ with

$$F(x_1,\dots,x_d) = C(F_1(x_1),\dots,F_d(x_d)).$$

A vector $(X_1,\dots,X_m)$ of correlated $N(0,1)$ random variables can be used to create dependent uniform random variables $U_i = \Phi(X_i)$, $i = 1,\dots,m$; the $U_1,\dots,U_m$ can in turn be used to generate further dependent random variables. This copula is called a normal copula because of the use of the normal distribution function $\Phi$. The correlation between $U_i$ and $U_j$, for $i \neq j$, is called the copula correlation.

1.4 Factor Model

Suppose there are $m$ obligors, with associated random variables $(X_1,\dots,X_m) \sim N(0, I_m)$ and default thresholds $x_i = \Phi^{-1}(1 - p_i)$, where $p_i$ is the default probability of obligor $i$. The next step is to determine the default indicators $Y_i = \mathbf{1}\{X_i > x_i\}$ when the $X_i$ are dependent. To introduce a dependence structure between the random variables $X_i$, a one-factor model is used. The following theory is based on Glasserman [Glasserman, 2004, Section 9.3]: let $Z \sim N(0,1)$ be a common factor that affects all obligors, and let $\epsilon_i \sim N(0,1)$ be the independent idiosyncratic movement of obligor $i$. Represent this in the random variables $X_1,\dots,X_m$ by

$$X_i = \rho Z + \sqrt{1 - \rho^2}\,\epsilon_i, \qquad i = 1,\dots,m,$$

where each $\epsilon_i$ is independent of the others and of $Z$. The parameter $\rho$ is called the factor loading; the correlation between $X_i$ and $X_j$, for $i \neq j$, is $\rho^2$.
The variance of $X_i$ is

$$\operatorname{Var}(X_i) = \operatorname{Var}\!\left(\rho Z + \sqrt{1-\rho^2}\,\epsilon_i\right) = \rho^2 \operatorname{Var}(Z) + (1-\rho^2)\operatorname{Var}(\epsilon_i) = \rho^2 + (1-\rho^2) = 1,$$

so the variance of the state variables $X_i$ is unchanged by the introduction of the factor model. Given $Z = z$, the conditional probability of default becomes

$$p_i(z) = P(Y_i = 1 \mid Z = z) = P(X_i > x_i \mid Z = z) = P\!\left(\epsilon_i > \frac{x_i - \rho z}{\sqrt{1-\rho^2}}\right) = 1 - \Phi\!\left(\frac{x_i - \rho z}{\sqrt{1-\rho^2}}\right).$$

The last equality holds because $\epsilon_i \sim N(0,1)$. Given the common factor $Z$, the problem reduces to the independent case: when $Z$ is fixed, the only variation in the state variables comes from the independent variables $\epsilon_i$. Conditionally, the portfolio loss $L$ is a sum of $m$ independent random variables.
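As an illustration, one replication of this conditional scheme can be coded in a few lines. This is a minimal sketch, not the thesis's implementation, and the parameter values below are purely illustrative:

```python
import random
from math import sqrt
from statistics import NormalDist

STD_NORMAL = NormalDist()  # standard normal: .cdf is Phi, .inv_cdf is Phi^{-1}

def replicate_loss(m, p, c, rho, rng):
    """One replication of the one-factor model: draw the common factor Z,
    compute the conditional default probability p_i(Z), then draw the m
    conditionally independent default indicators and return L = sum_i Y_i c_i."""
    x_i = STD_NORMAL.inv_cdf(1.0 - p)            # default threshold x_i = Phi^{-1}(1 - p)
    z = rng.gauss(0.0, 1.0)                      # common factor Z ~ N(0, 1)
    p_z = 1.0 - STD_NORMAL.cdf((x_i - rho * z) / sqrt(1.0 - rho ** 2))
    return sum(c for _ in range(m) if rng.random() < p_z)

rng = random.Random(1)
losses = [replicate_loss(1000, 0.01, 1.0, 0.25, rng) for _ in range(2000)]
mean_loss = sum(losses) / len(losses)            # should be near m * p
```

Since $E[p_i(Z)] = p_i$, the unconditional mean loss stays at $m p$; the factor loading only spreads the losses out across replications.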

2 Simulation

2.1 Monte Carlo Simulation

Monte Carlo simulation is a technique used in financial mathematics to analyze complex instruments, investments and portfolios by simulating the uncertainty that affects their value. The simulation is repeated thousands of times, sampling from the distributions within the simulation each time. For example, $\alpha = \int_0^1 f(x)\,dx$ means that $\alpha = E[f(U)]$, where $U \sim U[0,1]$. Drawing realizations $U_1,\dots,U_n$ of this distribution, an estimate of $\alpha$ is given by

$$\hat{\alpha}_n = \frac{1}{n}\sum_{i=1}^{n} f(U_i).$$

The law of large numbers implies that $\hat{\alpha}_n$ converges to $\alpha$ as $n \to \infty$ with probability 1. Furthermore, set $\operatorname{Var}(f(U)) = \sigma_f^2 = \int_0^1 (f(x) - \alpha)^2\,dx$; the error $\hat{\alpha}_n - \alpha$ is then approximately normally distributed with mean 0 and standard deviation $\sigma_f/\sqrt{n}$.

2.2 Confidence Intervals

The confidence intervals presented are determined by simulating the loss probability 10,000 times. From these values, the 95% confidence interval is created:

$$\left[\mu_n - 1.96\,\frac{s_n}{\sqrt{n}},\; \mu_n + 1.96\,\frac{s_n}{\sqrt{n}}\right],$$

where $\mu_n$ is the arithmetic mean of the $n$ replications and $s_n$ is the sample standard deviation. The standard error is given by $s_n/\sqrt{n}$.

2.3 Parameters of the Simulation

To allow easy comparison between results, the portfolio is homogenized. The portfolio that is evaluated has $m = 1000$ obligors, all with the same loan, so the size of the loan is $c_i = 1$ and the default probability $p_i$ is the same for each obligor $i$. In addition, a number of test cases for the factor model are chosen. In order to compare the simulations of the test cases, the desired loss probability needs to be fixed. The loss probability can be influenced by the default threshold of the portfolio, $x$. Large losses in the portfolio are events that do not occur frequently, and so rare-event simulation must be used to simulate these losses.
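The estimator and confidence interval of Sections 2.1 and 2.2 can be sketched as follows; the integrand is chosen purely for illustration:

```python
import math
import random

def mc_estimate(f, n, rng):
    """Plain Monte Carlo estimate of alpha = integral_0^1 f(x) dx,
    with the 95% confidence interval [mean -/+ 1.96 s_n / sqrt(n)]."""
    samples = [f(rng.random()) for _ in range(n)]
    mean = sum(samples) / n
    s2 = sum((s - mean) ** 2 for s in samples) / (n - 1)   # sample variance s_n^2
    se = math.sqrt(s2 / n)                                 # standard error s_n / sqrt(n)
    return mean, (mean - 1.96 * se, mean + 1.96 * se)

rng = random.Random(0)
alpha_hat, ci = mc_estimate(lambda x: x * x, 10_000, rng)  # true alpha = 1/3
```

Repeating with larger $n$ shrinks the interval at the rate $1/\sqrt{n}$, which is exactly why rare events need the variance reduction techniques of the following chapters.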
The simulation looks for the 1% loss probability of the portfolio and its standard deviation; for each of the test cases $\rho = 0.05, 0.1, 0.25, 0.5, 0.8$, the default threshold $x$ suggested by Glasserman [Glasserman, 2004] is used, which gives a loss probability very close to 1%.

2.4 Results of the Standard Monte Carlo Simulation

The program used to simulate the model begins by randomly selecting a common factor for each replication. Then the conditional default probability $p_i(z)$ from Section 1.4 is calculated. The default indicators $Y_i$ are determined by creating a vector $V$ of $m$ uniform random numbers, with $m$ the number of obligors, and

then determining componentwise whether $V < p$, where $p$ is the vector of the $p_i(z)$. The loss is then calculated as $L = \sum_i Y_i c_i$. This simulation is repeated 10,000 times, so that there are 10,000 different instances of $L$. The event $L > x$ indicates that a portfolio has crossed the default threshold $x$, and so the loss probability is found by summing all these instances of default and dividing by the number of repetitions. Simulating the portfolio for each test case produces a sample mean, sample standard deviation and sample standard error of the loss probability. The standard error appears too large to give an accurate loss probability, being between 20% and 33% of the mean; this can be remedied through the variance reduction techniques described in Chapter 3.

2.5 Assessing the Variance Reduction and Batch Simulation

The use of Monte Carlo simulation brings with it the option of using a number of variance reduction techniques that can reduce the variance of the simulation, sometimes drastically. By reducing the variance, a choice can be made between accuracy and speed: if a specific accuracy is needed for the results, the variance can be reduced until the desired accuracy is achieved; if speed is desired, the number of replications can be lowered while maintaining the desired accuracy. One variance reduction technique uses an antithetic variable: a second, identically distributed random variable that is combined with the first to reduce the variance. Another technique uses a control variate, a similarly distributed random variable whose expectation is known; the variance is reduced through the correlation between the two variables. A third method, called importance sampling, uses a new distribution chosen specifically to increase the probability of the interesting event occurring, namely an obligor defaulting.
This is the method that will be used and further explored in the next chapter. The variance reduction is determined by dividing the variance of the standard Monte Carlo simulation by the variance of the new simulation; this gives the factor by which the new variance is smaller than the variance of the standard Monte Carlo. The standard error of the simulated variance reduction is calculated by splitting the simulation into 201 batches of 1000 repetitions each, and then determining the 95% confidence interval of the variance reduction. This works as follows: suppose the random variable $X$, with $E[X] = \mu$ and $\operatorname{Var}(X) = \sigma^2$, is simulated repeatedly, producing $n$ batches of $k$ replications of $X$, for a total of $m = nk$ replications. Given that the result of replication $j$ of batch $i$ is $X_{ij}$, with $i = 1,\dots,n$ and $j = 1,\dots,k$, the mean of batch $i$ is given by

$$\bar{X}_i = \frac{1}{k}\sum_{j=1}^{k} X_{ij},$$

from which the overall mean is determined by

$$\bar{X} = \frac{\bar{X}_1 + \dots + \bar{X}_n}{n}.$$

The standard error of $\bar{X}$ is $\sigma/\sqrt{m}$. The batch mean $\bar{X}_i$ has expected value $E[\bar{X}_i] = \mu$ and variance $\sigma_b^2 = \operatorname{Var}(\bar{X}_i) = \sigma^2/k$. By the central limit theorem, if $k$ is large then $\bar{X}_i \approx N(\mu, \sigma_b^2)$. An unbiased estimator for $\sigma_b^2$ is given by

$$s_b^2 = \frac{1}{n-1}\sum_{i=1}^{n} (\bar{X}_i - \bar{X})^2.$$

Then $\hat{\sigma}^2 = k\,s_b^2$ is an unbiased estimator for $\sigma^2$. If $k$ is large enough to apply the central limit theorem, $\hat{\sigma}^2$ has a scaled $\chi^2(n-1)$ distribution. This follows from the standard fact about $\chi^2$ distributions: if the random variables $Y_1,\dots,Y_m$ are independent and identically $N(\mu, \sigma^2)$ distributed, then for $s^2 = \frac{1}{m-1}\sum_i (Y_i - \bar{Y})^2$, where $\bar{Y}$ is the mean of $Y_1,\dots,Y_m$, it holds that $(m-1)s^2/\sigma^2 \sim \chi^2(m-1)$. The variance of the estimator $\hat{\sigma}^2$ is therefore

$$\operatorname{Var}(\hat{\sigma}^2) = \operatorname{Var}(k\,s_b^2) = \left(\frac{k\,\sigma_b^2}{n-1}\right)^{2} \operatorname{Var}\!\left(\frac{(n-1)\,s_b^2}{\sigma_b^2}\right) = \left(\frac{k\,\sigma_b^2}{n-1}\right)^{2} 2(n-1) = \frac{2\sigma^4}{n-1},$$

using $k\,\sigma_b^2 = \sigma^2$ and the fact that a $\chi^2(n-1)$ variable has variance $2(n-1)$. So the standard error for a simulation with $n$ batches of $k$ repetitions is $\mathrm{s.e.}(\hat{\sigma}^2) = \sqrt{2/(n-1)}\,\sigma^2$. The relative standard error, the standard error divided by the true value of the parameter, is thus $\sqrt{2/(n-1)}$. In this simulation $k = 1000$ and $n = 201$, so the relative standard error is 10%.

3 Importance Sampling

Importance sampling can be a powerful technique for reducing the variance of an estimator. It emphasizes important values that increase the probability of seeing a particular event, in this case a default. The following theory is based on Glasserman [Glasserman, 2004, Section 9.3] and Glasserman & Li [Glasserman, 2005]. A biased distribution is chosen to sample from, and the samples of this distribution are then weighted to correct for the use of a bias. Suppose $\alpha = E[h(X)]$, with $X$ a continuous random variable and $h$ a function. Then $\alpha = \int h(x) f(x)\,dx$, with $f$ the density function of $X$.
Take a density function $g$ such that $g(x) > 0$ whenever $f(x) > 0$; then $\alpha$ can be written as

$$\alpha = \int h(x)\,\frac{f(x)}{g(x)}\,g(x)\,dx = \tilde{E}\!\left[h(X)\,\frac{f(X)}{g(X)}\right],$$

so $\alpha$ is now found by sampling from a new density; the tilde denotes the use of the biased density $g$. The fraction $f(x)/g(x)$ is called the likelihood ratio. If $g$ is chosen well,

the second moment, and therefore the variance, can be reduced:

$$\tilde{E}\!\left[\left(h(X)\,\frac{f(X)}{g(X)}\right)^{\!2}\right] = \tilde{E}\!\left[h(X)^2\,\frac{f(X)^2}{g(X)^2}\right] = E\!\left[h(X)^2\,\frac{f(X)}{g(X)}\right].$$

This last value can be greater or smaller than $E[h(X)^2]$, depending on $g$. When simulating a portfolio of loans, the density function $g$ can be chosen using different methods of importance sampling. Three methods are used and analyzed as part of this project: exponential tilting, factor twisting with a normal distribution, and factor twisting with the zero-variance distribution. These will be called Methods 1, 2 and 3, respectively. The results of the variance reduction are presented without standard errors; however, the number of batches used in the simulation translates into a standard error of approximately 10% of the variance reduction. For more detailed results, consult Appendix A.

3.1 Method 1: Exponential Tilting

Exponential tilting is a specific type of importance sampling. In order to understand how to use it, some other terms must be explained. Define the cumulant generating function of the distribution function $F$ as

$$\psi(\theta) = \log\!\left(\int e^{\theta x}\,dF(x)\right) = \log E[e^{\theta X}];$$

this is the logarithm of the moment generating function of $F$, defined for all $\theta$ for which $\psi(\theta)$ remains finite. The exponential tilting of $F$ is defined as

$$F_\theta(x) = \int_{-\infty}^{x} e^{\theta u - \psi(\theta)}\,dF(u).$$

Each $F_\theta$ is a probability distribution, and $F = F_0$. If $F$ has a density $f$, then $F_\theta$ has density function $f_\theta(x) = e^{\theta x - \psi(\theta)} f(x)$ [Glasserman, 2004, Section 4.6.2]. Given independent, identically distributed random variables $X_1,\dots,X_n$ with probability distribution $F = F_0$, the likelihood ratio for changing the distribution to $F_\theta$ is

$$\prod_{i=1}^{n} \frac{dF_0(X_i)}{dF_\theta(X_i)} = \prod_{i=1}^{n} \frac{f(X_i)}{e^{\theta X_i - \psi(\theta)} f(X_i)} = \exp\!\left(-\theta \sum_{i=1}^{n} X_i + n\psi(\theta)\right).$$

It follows from the definition of $\psi(\theta)$ that

$$\psi'(\theta) = \frac{d}{d\theta}\log E_0[e^{\theta X}] = \frac{E_0[X e^{\theta X}]}{E_0[e^{\theta X}]} = E_0[X e^{\theta X}]\,e^{-\psi(\theta)} = E_0\!\left[X e^{\theta X - \psi(\theta)}\right] = E_\theta[X],$$

and $\psi''(\theta) = \operatorname{Var}_\theta(X)$ following a similar calculation.
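The identity $\psi'(\theta) = E_\theta[X]$ can be verified numerically for the building block of the conditional loss, a single term $cY$ with $Y \sim \mathrm{Bernoulli}(p)$; the numbers below are illustrative:

```python
from math import exp, log

def psi(theta, p, c):
    """Cumulant generating function of c * Y with Y ~ Bernoulli(p):
    psi(theta) = log(p e^{c theta} + 1 - p)."""
    return log(p * exp(c * theta) + 1.0 - p)

def tilted_p(theta, p, c):
    """Default probability under the exponentially tilted distribution."""
    w = p * exp(c * theta)
    return w / (w + 1.0 - p)

p, c, theta, h = 0.01, 1.0, 2.0, 1e-6
psi_prime = (psi(theta + h, p, c) - psi(theta - h, p, c)) / (2 * h)  # numerical psi'
mean_tilted = c * tilted_p(theta, p, c)                              # E_theta[c Y]
```

The numerical derivative of $\psi$ agrees with the mean under the tilted distribution, and at $\theta = 0$ the tilted default probability reduces to $p$, matching $F = F_0$.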
$E_\theta[X]$ is the expectation of $X$ when tilted, and $\operatorname{Var}_\theta(X)$ is the variance of $X$ when tilted. As stated in the description of the dependent model, conditioning on $Z$ reduces the problem to the independent case; therefore, the calculations of this section can be used on the dependent model when conditional on $Z$. Write

$$\phi_{L|Z}(\theta) = \log E\!\left[e^{\theta L} \mid Z\right].$$

Recall that $L = \sum_i c_i Y_i$ and that $Y_i = 1$ with probability $p_i$ and $Y_i = 0$ with probability $1 - p_i$. This means that

$$\phi_{L|Z}(\theta) = \log E\!\left[e^{\theta \sum_{i=1}^{m} Y_i c_i} \,\middle|\, Z\right] = \log \prod_{i=1}^{m} E\!\left[e^{\theta Y_i c_i} \mid Z\right] = \sum_{i=1}^{m} \log\!\left(p_i e^{c_i \theta} + (1 - p_i)\right).$$

The optimal $\theta$ for reducing the variance is determined by solving $\phi'_{L|Z}(\theta_x) = x$ for a particular $x$, which follows from the calculations in Glasserman [Glasserman, 2004, Section 9.2.2]. Define

$$\phi_i(\theta) = \log\!\left(p_i e^{c_i \theta} + (1 - p_i)\right).$$

It follows from $E_{\theta}[L] = x$ that $E_{\theta}[Y_i c_i] = \phi_i'(\theta_x)$ [Glasserman, 2004, Section 9.2.2]. The tilted default probability for each obligor is then

$$p_i(\theta_x) = \frac{\phi_i'(\theta_x)}{c_i} = \frac{p_i e^{\theta_x c_i}}{p_i e^{\theta_x c_i} + 1 - p_i}.$$

In order to transform the results from the simulation, they need to be multiplied by the likelihood ratio of the exponential tilt. The estimator of the loss probability becomes

$$e^{-\theta_x L + \phi_{L|Z}(\theta_x)}\,\mathbf{1}\{L > x\}.$$

Results of Exponential Tilting

The exponential tilting is implemented in the simulation by solving $\phi'_{L|Z}(\theta_x) = x$ in order to calculate $\theta_x$, after which $\phi_{L|Z}(\theta_x)$ is determined. The tilted default probability $p_i(\theta_x)$ can then be found using the formula above. The indicators $Y_i$ and the portfolio loss $L$ are determined in the same manner as in the standard Monte Carlo simulation, but the estimator $\mathbf{1}\{L > x\}$ is multiplied by the likelihood ratio $e^{-\theta_x L + \phi_{L|Z}(\theta_x)}$ to account for the exponential tilting. The variance of the estimator from the standard Monte Carlo simulation is then divided by the variance of the estimator with exponential tilting to determine the variance reduction. In the first test case, $\rho = 0.05$, there is a reasonably high variance reduction, but the values for the other test cases are not of the same magnitude. Thus another method of importance sampling is needed; a technique called factor twisting is chosen.

3.2 Method 2: Factor Twisting

It is now possible to apply importance sampling to the normally distributed common factor $Z$.
From the law of total variance, also known as the variance decomposition formula, it follows that the variance of the estimator given in Section 3.1 is equal to

$$\operatorname{Var}\!\left(e^{-\theta_x L + \phi_{L|Z}(\theta_x)}\,\mathbf{1}\{L > x\}\right) = E\!\left[\operatorname{Var}\!\left(e^{-\theta_x L + \phi_{L|Z}(\theta_x)}\,\mathbf{1}\{L > x\} \,\middle|\, Z\right)\right] + \operatorname{Var}\!\left(P(L > x \mid Z)\right).$$

This shows that tilting the $Y_i$ conditional on $Z$ makes the first term small but has no effect on the second. Using importance sampling on $Z$ may reduce this second term. Introduce a mean $\mu$ so that $Z$ is realized from a $N(\mu, 1)$ distribution instead of a standard normal distribution; the likelihood ratio for this change is $e^{-\mu Z + \frac{1}{2}\mu^2}$. Multiplying this likelihood ratio with the estimator of the loss probability for exponential tilting, the complete estimator for the tilted and twisted distribution becomes

$$e^{-\mu Z + \frac{1}{2}\mu^2}\; e^{-\theta_x L + \phi_{L|Z}(\theta_x)}\,\mathbf{1}\{L > x\}.$$
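Putting the pieces together, the tilted-and-twisted estimator can be sketched for a homogeneous portfolio as below. This is only an illustration of the mechanics, not the thesis's implementation: the parameter values, the shift $\mu$, and the bisection solver for $\phi'_{L|Z}(\theta_x) = x$ are all illustrative choices:

```python
import random
from math import exp, log, sqrt
from statistics import NormalDist

N01 = NormalDist()

def solve_theta(m, p_z, c, x):
    """Bisection for phi'(theta) = m c p(theta) = x; theta = 0 if E[L|Z] >= x."""
    mean = lambda t: m * c * p_z * exp(c * t) / (p_z * exp(c * t) + 1.0 - p_z)
    if mean(0.0) >= x:
        return 0.0                                     # no tilt needed
    lo, hi = 0.0, 1.0
    while mean(hi) < x:                                # bracket the root
        hi *= 2.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mean(mid) < x else (lo, mid)
    return 0.5 * (lo + hi)

def is_estimate(m, p, c, rho, x, mu, n_reps, rng):
    """Importance-sampling estimate of P(L > x): shift the common factor Z
    to N(mu, 1), exponentially tilt the conditional default probabilities,
    and weight each replication by both likelihood ratios."""
    thresh = N01.inv_cdf(1.0 - p)
    total = 0.0
    for _ in range(n_reps):
        z = mu + rng.gauss(0.0, 1.0)                   # Z ~ N(mu, 1)
        lr_z = exp(-mu * z + 0.5 * mu * mu)            # likelihood ratio for the shift
        p_z = 1.0 - N01.cdf((thresh - rho * z) / sqrt(1.0 - rho ** 2))
        theta = solve_theta(m, p_z, c, x)
        p_t = p_z * exp(c * theta) / (p_z * exp(c * theta) + 1.0 - p_z)
        loss = sum(c for _ in range(m) if rng.random() < p_t)
        phi = m * log(p_z * exp(c * theta) + 1.0 - p_z)  # phi_{L|Z}(theta)
        if loss > x:
            total += lr_z * exp(-theta * loss + phi)
    return total / n_reps

rng = random.Random(7)
est = is_estimate(m=1000, p=0.01, c=1.0, rho=0.25, x=40.0, mu=1.0, n_reps=2000, rng=rng)
```

With the tilt in place, losses beyond the threshold occur in a large fraction of the replications, and the likelihood-ratio weights undo the bias so the estimate remains valid.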

The implementation of this step means that the simulation of the model can be checked: a graph in Glasserman & Li [Glasserman, 2005] can be reproduced to confirm that the simulation works correctly. Figure 1 shows the variance reduction of the test cases as produced by Glasserman, with the common factor having a normal distribution with different means. The reproduction in Figure 2 confirms that the simulations were performing correctly. The reproduction also confirms the results of the exponential tilting, as those are the same variance reductions shown in the graph at $\mu = 0$.

Figure 1: Variance reduction using change of mean, from Glasserman

Figure 2: Variance reduction using change of mean, reproduction by Jelle Straathof

Results of the Factor Twisting

Using a similar approach as with the exponential tilting, the factor twisting adjusts the common factor $Z$ with the mean $\mu$, so that it is sampled from a $N(\mu, 1)$ distribution. The new default probability $p_i(\theta_x)$ is calculated, after which the default indicators and the losses are determined as described in Section 2.4. The estimator from Section 3.2 is calculated and its variance is determined. Once again the variance of the estimator of the loss probability from the standard Monte Carlo simulation is divided by the variance of the estimator of the loss probability from the tilted and twisted simulation to determine the variance reduction. The optimal $\mu$ is determined by calculating the variance reduction for each $\mu$ between 0 and 2 for the test case $\rho = 0.05$, between 0 and 2.5 for the test case $\rho = 0.1$, and between 0 and 4 for the test cases $\rho = 0.25, 0.5, 0.8$. The resulting maximum variance reductions, shown in Figures 1 and 2, are between 2 and 40 times

better than the ones achieved with only the exponential tilting. However, with the change in distribution for the common factor, there may be a better type of distribution than the shifted standard normal distribution. The zero-variance distribution provides a way of looking for such a distribution.

3.3 Method 3: Zero-Variance Distribution

The change in distribution considered in the previous section comes from the theory of the zero-variance distribution, mentioned in Glasserman & Li [Glasserman, 2005]. This states that there is a distribution of the common factor that results in an estimator of the loss probability with a variance of zero. Use the importance sampling theory, where

$$I = E[h(X)] = \int \frac{h(x) f(x)}{g(x)}\,g(x)\,dx,$$

so that if the random variable $Y$ has density $g$, then $I = E[h(Y)\,\mathrm{LR}(Y)]$, where $\mathrm{LR}(Y) = f(Y)/g(Y)$ is the likelihood ratio for using $Y$, and thus

$$\operatorname{Var}\!\left(h(Y)\,\mathrm{LR}(Y)\right) = \int \left(h(x)\,\frac{f(x)}{g(x)} - I\right)^{2} g(x)\,dx.$$

In addition, with $h$ non-negative, the product $h(x) f(x)$ is non-negative. This means that it can be normalized to a probability density. Supposing that $g$ is this density, so that $g(x)$ is proportional to $h(x) f(x)$, the variance is equal to zero; in this case, the constant of proportionality is $1/I$. Applying this to the situation of the loan portfolio, take $h(z)$ to be the conditional loss probability $P(L > x \mid Z = z)$ and $f(z)$ the original density of the common factor, which is the standard normal density. The zero-variance distribution of the common factor should then have a density proportional to

$$z \mapsto P(L > x \mid Z = z)\,\exp(-z^2/2).$$

The goal is to simulate the loss probability $P(L > x \mid Z = z)$, yet this same quantity is needed to determine the zero-variance distribution. Therefore, the loss probability needs to be approximated, so that the zero-variance distribution can also be approximated.
In this case, the normal approximation for the loss probability is used, so $P(L > x \mid Z = z)$ is replaced:

$$P(L > x \mid Z = z) \approx 1 - \Phi\!\left(\frac{x - E[L \mid Z = z]}{\sqrt{\operatorname{Var}(L \mid Z = z)}}\right),$$

noting that $E[L \mid Z = z] = \sum_k p_k(z)\,c_k$ and $\operatorname{Var}(L \mid Z = z) = \sum_k c_k^2\,p_k(z)(1 - p_k(z))$. The accuracy of this approximation can be seen in Figures 3 through 7, where the loss probability and its 95% confidence interval are plotted against the approximation of the loss probability. The normal approximation is in all cases larger than the simulated loss probability; however, as $\rho$ increases, the approximation approaches the simulated loss probability. Additionally, Glasserman recommends fitting a normal distribution with a mean equal to the mode of the optimal density [Glasserman, 1999]. This suggests that using a different distribution that fits the calculated density better might increase the variance reduction.
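For a homogeneous portfolio, the normal approximation of the conditional loss probability reduces to a few lines; the parameter values below are illustrative:

```python
from math import sqrt
from statistics import NormalDist

N01 = NormalDist()

def cond_default_prob(z, p, rho):
    """p(z) = 1 - Phi((x_i - rho z) / sqrt(1 - rho^2)), with x_i = Phi^{-1}(1 - p)."""
    x_i = N01.inv_cdf(1.0 - p)
    return 1.0 - N01.cdf((x_i - rho * z) / sqrt(1.0 - rho ** 2))

def normal_approx_loss_prob(x, z, m, p, c, rho):
    """Normal approximation of P(L > x | Z = z), using
    E[L|z] = m c p(z) and Var(L|z) = m c^2 p(z) (1 - p(z))."""
    pz = cond_default_prob(z, p, rho)
    mean = m * c * pz
    sd = sqrt(m * c * c * pz * (1.0 - pz))
    return 1.0 - N01.cdf((x - mean) / sd)

approx_high = normal_approx_loss_prob(x=40.0, z=2.5, m=1000, p=0.01, c=1.0, rho=0.25)
approx_low = normal_approx_loss_prob(x=40.0, z=0.0, m=1000, p=0.01, c=1.0, rho=0.25)
```

Evaluating this approximation on a grid of $z$ values and multiplying by $\exp(-z^2/2)$ gives the (unnormalized) approximate zero-variance density of the common factor.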

Figure 3: Approximation of the loss probability for ρ = 0.05

Figure 4: Approximation of the loss probability for ρ = 0.1

Figure 5: Approximation of the loss probability for ρ = 0.25

Figure 6: Approximation of the loss probability for ρ = 0.5

After determining the zero-variance distribution, it became clear that a shifted lognormal distribution would fit it well. In order to determine the parameters of both the normal and lognormal fits of the zero-variance distribution, the method of moments was used. This method uses the results of the simulation as a sample from which to determine the mean and standard deviation of the normal and lognormal distributions that fit the results best. Therefore, the first parameter of the normal distribution, $\mu$, is the mean of the simulated values, and the second parameter, $\sigma$, is the sample standard deviation of the simulated values. The lognormal parameters were determined by first using the method of moments to calculate the mean, standard deviation and skewness of the results, and then converting these to the parameters of the lognormal distribution. The exact transformations can be seen in Appendix B.
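Matching a shifted lognormal to a sample by the method of moments can be sketched as follows. These are the standard moment transformations for a shifted lognormal $X = \gamma + e^{N(\mu,\sigma^2)}$, offered as an illustration rather than the exact form of Appendix B: the skewness $s = (w+2)\sqrt{w-1}$ depends only on $w = e^{\sigma^2}$, so $w$ can be solved from the sample skewness, after which $\mu$ and the shift $\gamma$ follow from the sample standard deviation and mean:

```python
from math import exp, log, sqrt

def fit_shifted_lognormal(mean, sd, skew):
    """Method-of-moments fit of X = gamma + exp(N(mu, sigma^2)), skew > 0.
    Solves (w + 2) sqrt(w - 1) = skew for w = e^{sigma^2} by bisection,
    then matches the standard deviation and the mean."""
    f = lambda w: (w + 2.0) * sqrt(w - 1.0)
    lo, hi = 1.0, 2.0
    while f(hi) < skew:                         # bracket the root
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < skew else (lo, mid)
    w = 0.5 * (lo + hi)
    sigma = sqrt(log(w))
    mu = 0.5 * log(sd * sd / (w * (w - 1.0)))   # from sd^2 = w (w - 1) e^{2 mu}
    gamma = mean - exp(mu) * sqrt(w)            # from mean = gamma + e^{mu + sigma^2/2}
    return gamma, mu, sigma

params = fit_shifted_lognormal(mean=3.13, sd=0.60, skew=1.75)  # illustrative moments
```

Feeding back the exact moments of a known shifted lognormal recovers its parameters, which is a convenient sanity check for this kind of transformation.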

Figure 7: Approximation of the loss probability for ρ = 0.8

Fitting the Zero-Variance Distribution

The estimator of the loss probabilities, when the common factor is lognormally distributed, is very close to the loss probabilities shown in Glasserman [Glasserman, 2004]:

Loss Probability
ρ | Glasserman | Sample Mean ± Sample Standard Error

This means that the lognormal distribution of the common factor can be used for reducing the variance of the estimator of the loss probability. As seen in Figures 8 through 12, when ρ is small there is very little difference between the simulated distribution and the fitted normal and lognormal distributions. However, at higher values of ρ, the simulated distribution becomes more skewed to the left, and as such the lognormal distribution becomes a better fit.

Results of the Zero-Variance Distribution

The zero-variance distribution is close to a shifted lognormal distribution. The simulation for the estimator of the loss probability and its variance with the zero-variance distribution is the same as the simulation for the factor twisting, except that the common factor takes on the shifted lognormal distribution instead of the shifted normal distribution. The variance reduction achieved with a lognormally distributed common factor is as follows:

Variance Reduction
ρ | Methods 1 & 2 | Methods 1 & 3

Figure 8: Normal and lognormal fits of the ρ = 0.05 distribution

Figure 9: Normal and lognormal fits of the ρ = 0.1 distribution

Figure 10: Normal and lognormal fits of the ρ = 0.25 distribution

Figure 11: Normal and lognormal fits of the ρ = 0.5 distribution

This suggests that the shifted lognormally distributed common factor should be used: at low ρ its variance reduction is only slightly smaller than that achieved with the shifted normal distribution, and when ρ is low the approximation of the conditional loss probability used to determine the new distribution of the common factor is not accurate, as seen in Figures 3 through 7. With increasing ρ, the lognormal distribution shows greater variance reduction than that achieved by the shifted normal distribution. More detailed results, as well as the relative variance reduction for the different methods, can be found in Appendix A.

Figure 12: Normal and lognormal fits of the ρ = 0.8 distribution

4 Expected Shortfall for Importance Sampling

The expected shortfall has also been calculated for the test cases, with the corresponding variance reduction compared to the standard Monte Carlo simulation. All the following results are based on 10,000 replications. The standard Monte Carlo simulation gives:

Expected Shortfall
ρ | Sample Mean | Sample Standard Error

The variance reduction of the expected shortfall for the different methods of importance sampling is as follows:

Variance Reduction
ρ | Method 1 | Methods 1 & 2 | Methods 1 & 3

The variance reduction for the expected shortfall follows a similar pattern as seen with the loss probability in Chapter 3: the change of mean of the normal distribution provides less variance reduction than the lognormal distribution of the common factor.
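Given simulated losses and importance-sampling weights, the expected shortfall $E[L \mid L > x]$ can be estimated as a weighted average over the replications that exceed the threshold; a plain Monte Carlo version simply uses unit weights. This is a sketch with made-up inputs, not the thesis's code:

```python
def expected_shortfall(losses, weights, x):
    """Estimate E[L | L > x] as sum(w_i L_i 1{L_i > x}) / sum(w_i 1{L_i > x})."""
    num = sum(w * l for l, w in zip(losses, weights) if l > x)
    den = sum(w for l, w in zip(losses, weights) if l > x)
    return num / den if den > 0 else float("nan")

# Plain Monte Carlo: all weights equal to 1.
losses = [3, 45, 12, 60, 41, 8]
es = expected_shortfall(losses, [1.0] * len(losses), x=40)
```

Under importance sampling, the weights are the likelihood ratios of the replications, so the same function covers both the standard and the tilted-and-twisted simulations.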

5 Variations in the model

The results described in Chapters 3 and 4 were all based on the assumption that the portfolio was homogeneous, with the same default probability $p_i$ for each obligor and loans worth $c_i = 1$. To confirm the robustness of the model, these values can be changed. Appendix C1 shows the results for the different default probabilities; Appendix C2 shows the results for the loan sizes $c_i = 0.5, 1, 2$.

5.1 Reducing the default probability

Reducing the default probability of the obligors, keeping $c_i = 1$, results in a standard Monte Carlo simulation with very few defaults occurring. This in turn causes the importance sampling to greatly increase the number of defaults, and so increases the variance reduction compared to the base portfolio, for both the loss probability and the expected shortfall. For example, when $\rho = 0.05$, the variance reductions of the loss probability with Methods 1 & 2 and with Methods 1 & 3 are 33 and 43 times greater, respectively, than for the base portfolio. Although the loss probability still achieves the greatest variance reduction with the lognormally distributed common factor, the variance reduction for the expected shortfall is greater when using the shifted normal distribution for the common factor.

5.2 Reducing the loan size

Using importance sampling on a portfolio with loans worth $c_i = 0.5$ also results in greater variance reduction in a number of test cases, compared to a portfolio with larger loan sizes. In this case, however, the test case $\rho = 0.05$ results in a variance reduction of 0: no defaults occur in the standard Monte Carlo simulation, so there is a variance of 0 for both the loss probability and the expected shortfall.
In addition, when using the lognormally distributed common factor, the variance reduction for $\rho = 0.5, 0.8$ is lower than the variance reduction achieved with the base portfolio, by a factor 2 and 2.3, respectively, for the loss probability, and a factor 2.2 and 1.4, respectively, for the expected shortfall. As such, the best distribution to use here for the common factor is the shifted normal distribution.

5.3 Increasing the parameters of the portfolio

Greater $p_i$ and $c_i$ result in smaller variance reduction for the importance sampling methods used on both the loss probability and the expected shortfall, although the variance reduction still increases with increasing $\rho$ for both quantities. The smaller variance reduction arises because more defaults occur in the standard Monte Carlo simulation, so the importance sampling cannot reduce the variance as much as for the base portfolio. For the portfolios with $p_i = 0.004, c_i = 1$ and $p_i = 0.002, c_i = 2$, using the shifted normal distribution results in greater variance reduction than using the lognormal distribution for the common factor. These results also suggest that both ways of changing the distribution of the common factor should be kept available, so that the most accurate results can be achieved. However, the zero-variance

20 distribution used to determine the variance reductions for the different portfolios was optimized for p i = and c i = 1. Determining the zero-variance distribution for each of the different portfolios may result in greater variance reductions. 6 Conclusion Loan portfolios are important financial instruments for banks. They allow them to easily group similar products and to determine the outcomes of many products. It is necessary to provide accurate information regarding the losses that can be suffered by the bank. This can be done by modeling a portfolio and using Monte Carlo simulation to calculate the results, such as the loss probability and the expected shortfall. These results are only useful if the bank can determine whether or not they are accurate. It appears that Monte Carlo simulations combined with importance sampling yields accurate results for the loss probability of a portfolio of loans. Using the zero-variance distribution increases the reliability of the simulation by reducing the variance the most. The results are optimized for reducing the variance of the loss probability, so using a zero-variance distribution optimized for the expected shortfall could produce greater variance reduction. Further research can be done into using a better approximation of the loss probability than the normal approximation used here. Other areas that can be investigated further are the results of the variance reduction on other characteristics of the portfolio, such as the second moment of (L x) given that L > x, as well the use of other variance reduction techniques on such a portfolio. 19
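The expected shortfall mentioned in the conclusion can be estimated from the same simulated losses with a ratio estimator. A minimal sketch, assuming the losses are available as an array and that the importance-sampling weights, when supplied, are the per-path likelihood ratios (the function and argument names are illustrative, not from the thesis):

```python
import numpy as np

def expected_shortfall(losses, x, weights=None):
    """Ratio estimator of E[L | L > x].  With importance-sampling
    weights w it becomes sum(w*L*1{L>x}) / sum(w*1{L>x});
    without weights it is the plain tail average."""
    L = np.asarray(losses, dtype=float)
    w = np.ones_like(L) if weights is None else np.asarray(weights, dtype=float)
    tail = L > x
    if not tail.any():
        return float("nan")   # no exceedances observed in the sample
    return float((w * L * tail).sum() / (w * tail).sum())
```

Because both numerator and denominator are reweighted by the same likelihood ratios, the estimator remains consistent under the change of measure used for the loss probability.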

7 Appendix A

Presented here are the variance reductions generated by simulating with 201 batches, which should give a standard error of 10%. Due to the random nature of simulation, the realized standard errors will not be exactly 10%. The variance reduction achieved by each method of importance sampling is shown. For Methods 2 and 3, the variance reduction was calculated as the difference between the variance reduction achieved with Method 1 alone and that achieved with Methods 1 & 2 or with Methods 1 & 3, respectively.

[Table: variance reduction of the loss probability ± standard error, by ρ, with columns Method 1, Methods 1 & 2, Method 2, Methods 1 & 3 and Method 3; the numerical entries were lost in transcription.]

[Table: variance reduction of the expected shortfall ± standard error, by ρ, with columns Method 1, Methods 1 & 2 and Methods 1 & 3; the numerical entries were lost in transcription.]

From these results it can be concluded that the best method of importance sampling, for the loss probability as well as for the expected shortfall, is the combination of Methods 1 and 3, because it achieves the greatest variance reduction of all the methods.
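The batching described above can be sketched generically as follows. This is an illustration of the batch-means technique, not the thesis code; samples stands for the per-replication estimates, and the function name is mine:

```python
import math
from statistics import mean, stdev

def batch_estimate(samples, n_batches=201):
    """Batch means: split the replications into n_batches groups of
    equal size, compute the estimate within each batch, and report the
    overall mean together with the standard error across batches."""
    k = len(samples) // n_batches                  # replications per batch
    batch_means = [mean(samples[i * k:(i + 1) * k]) for i in range(n_batches)]
    return mean(batch_means), stdev(batch_means) / math.sqrt(n_batches)
```

Comparing such batch estimates for the standard and the importance-sampled simulations is one way to attach a standard error to a variance-reduction figure, which is why the number of batches (201 here) controls the roughly 10% precision quoted above.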

8 Appendix B

The method of moments uses the sample moments to determine the parameters of a distribution. The first step is to determine the mean, standard deviation and skewness of the data. The simulated distribution consists of discrete data points x_i, so the mean is

\bar{x} = \sum_i P(X = x_i)\, x_i.

The standard deviation is

s = \sqrt{\sum_i P(X = x_i)\,(x_i - \bar{x})^2}.

The skewness is

\gamma = E\left[\left(\frac{X - \bar{x}}{s}\right)^3\right] = \frac{\sum_i P(X = x_i)\,(x_i - \bar{x})^3}{\left(\sum_i P(X = x_i)\,(x_i - \bar{x})^2\right)^{3/2}}.

The three parameters of the shifted lognormal distribution, Y = \delta + e^Z with Z \sim N(\mu, \sigma^2), where \mu is the mean and \sigma the standard deviation of the underlying normal distribution and \delta is the shift along the x-axis, are determined by solving for \mu, \sigma and \delta in the following equations:

\bar{x} = \delta + e^{\mu + \sigma^2/2},
s^2 = \left(e^{\sigma^2} - 1\right) e^{2\mu + \sigma^2},
\gamma = \left(e^{\sigma^2} + 2\right) \sqrt{e^{\sigma^2} - 1}.

Solving these equations for the required variables gives, with u = \tfrac{1}{2}\left(\gamma^2 + 2 + \sqrt{\gamma^4 + 4\gamma^2}\right):

\sigma = \sqrt{\log\left(u^{1/3} + u^{-1/3} - 1\right)},
\mu = -\tfrac{1}{2}\sigma^2 + \tfrac{1}{2}\log\left(\frac{s^2}{e^{\sigma^2} - 1}\right),
\delta = \bar{x} - e^{\mu + \sigma^2/2}.

With these parameters, the fitted distributions are plotted as seen in the section on fitting the zero-variance distribution.
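The closed-form solution above can be implemented directly and checked by round-tripping through the forward moment map. This is an illustrative reimplementation (function names are mine, not the thesis'), valid for positive skewness γ > 0:

```python
import math

def fit_shifted_lognormal(xbar, s, gamma):
    """Moment-matching fit of Y = delta + exp(N(mu, sigma^2)):
    solve the three moment equations for (mu, sigma, delta).
    Uses w = exp(sigma^2) = u^(1/3) + u^(-1/3) - 1 with
    u = (gamma^2 + 2 + sqrt(gamma^4 + 4*gamma^2)) / 2."""
    u = 0.5 * (gamma**2 + 2.0 + math.sqrt(gamma**4 + 4.0 * gamma**2))
    w = u ** (1.0 / 3.0) + u ** (-1.0 / 3.0) - 1.0
    sigma = math.sqrt(math.log(w))
    mu = -0.5 * sigma**2 + 0.5 * math.log(s**2 / (w - 1.0))
    delta = xbar - math.exp(mu + 0.5 * sigma**2)
    return mu, sigma, delta

def moments_of_shifted_lognormal(mu, sigma, delta):
    """Forward map: mean, standard deviation and skewness of the fit."""
    w = math.exp(sigma**2)
    xbar = delta + math.exp(mu + 0.5 * sigma**2)
    s = math.sqrt((w - 1.0) * math.exp(2.0 * mu + sigma**2))
    gamma = (w + 2.0) * math.sqrt(w - 1.0)
    return xbar, s, gamma
```

Feeding the moments of a known shifted lognormal back into the fit recovers (µ, σ, δ) to floating-point accuracy, which is a quick correctness check on the algebra above.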

9 Appendix C1

These are the variance reductions generated by changing the default probability p_i of the portfolio.

[Table: variance reduction ± standard error for p_i = 0.001, c_i = 1, by ρ, of the loss probability and the expected shortfall, for Methods 1 & 2 and Methods 1 & 3; the numerical entries were lost in transcription.]

[Table: same layout for p_i = 0.002, c_i = 1; numerical entries lost in transcription.]

[Table: same layout for p_i = 0.004, c_i = 1; numerical entries lost in transcription.]

As seen in the tables, reducing the default probability of the portfolio increases the variance reduction that can be achieved for both the loss probability and the expected shortfall, whereas increasing the default probability decreases it.

10 Appendix C2

These are the variance reductions generated by changing the loan size c_i of the portfolio.

[Table: variance reduction ± standard error for p_i = 0.002, c_i = 0.5, by ρ, of the loss probability and the expected shortfall, for Methods 1 & 2 and Methods 1 & 3; the numerical entries were lost in transcription.]

[Table: same layout for p_i = 0.002, c_i = 1; numerical entries lost in transcription.]

[Table: same layout for p_i = 0.002, c_i = 2; numerical entries lost in transcription.]

On the whole, decreasing the loan size of the homogeneous portfolio increases the variance reduction that can be achieved in comparison to the case c_i = 1; conversely, increasing the loan size decreases the achievable variance reduction.



More information

Chapter 5. Continuous Random Variables and Probability Distributions. 5.1 Continuous Random Variables

Chapter 5. Continuous Random Variables and Probability Distributions. 5.1 Continuous Random Variables Chapter 5 Continuous Random Variables and Probability Distributions 5.1 Continuous Random Variables 1 2CHAPTER 5. CONTINUOUS RANDOM VARIABLES AND PROBABILITY DISTRIBUTIONS Probability Distributions Probability

More information

ECE 295: Lecture 03 Estimation and Confidence Interval

ECE 295: Lecture 03 Estimation and Confidence Interval ECE 295: Lecture 03 Estimation and Confidence Interval Spring 2018 Prof Stanley Chan School of Electrical and Computer Engineering Purdue University 1 / 23 Theme of this Lecture What is Estimation? You

More information

Business Statistics 41000: Probability 3

Business Statistics 41000: Probability 3 Business Statistics 41000: Probability 3 Drew D. Creal University of Chicago, Booth School of Business February 7 and 8, 2014 1 Class information Drew D. Creal Email: dcreal@chicagobooth.edu Office: 404

More information

Slides for Risk Management

Slides for Risk Management Slides for Risk Management Introduction to the modeling of assets Groll Seminar für Finanzökonometrie Prof. Mittnik, PhD Groll (Seminar für Finanzökonometrie) Slides for Risk Management Prof. Mittnik,

More information

"Pricing Exotic Options using Strong Convergence Properties

Pricing Exotic Options using Strong Convergence Properties Fourth Oxford / Princeton Workshop on Financial Mathematics "Pricing Exotic Options using Strong Convergence Properties Klaus E. Schmitz Abe schmitz@maths.ox.ac.uk www.maths.ox.ac.uk/~schmitz Prof. Mike

More information

10. Monte Carlo Methods

10. Monte Carlo Methods 10. Monte Carlo Methods 1. Introduction. Monte Carlo simulation is an important tool in computational finance. It may be used to evaluate portfolio management rules, to price options, to simulate hedging

More information

On Complexity of Multistage Stochastic Programs

On Complexity of Multistage Stochastic Programs On Complexity of Multistage Stochastic Programs Alexander Shapiro School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332-0205, USA e-mail: ashapiro@isye.gatech.edu

More information

A New Hybrid Estimation Method for the Generalized Pareto Distribution

A New Hybrid Estimation Method for the Generalized Pareto Distribution A New Hybrid Estimation Method for the Generalized Pareto Distribution Chunlin Wang Department of Mathematics and Statistics University of Calgary May 18, 2011 A New Hybrid Estimation Method for the GPD

More information

Rapid computation of prices and deltas of nth to default swaps in the Li Model

Rapid computation of prices and deltas of nth to default swaps in the Li Model Rapid computation of prices and deltas of nth to default swaps in the Li Model Mark Joshi, Dherminder Kainth QUARC RBS Group Risk Management Summary Basic description of an nth to default swap Introduction

More information

Computer Exercise 2 Simulation

Computer Exercise 2 Simulation Lund University with Lund Institute of Technology Valuation of Derivative Assets Centre for Mathematical Sciences, Mathematical Statistics Fall 2017 Computer Exercise 2 Simulation This lab deals with pricing

More information

Statistics for Business and Economics

Statistics for Business and Economics Statistics for Business and Economics Chapter 7 Estimation: Single Population Copyright 010 Pearson Education, Inc. Publishing as Prentice Hall Ch. 7-1 Confidence Intervals Contents of this chapter: Confidence

More information

Estimation of a parametric function associated with the lognormal distribution 1

Estimation of a parametric function associated with the lognormal distribution 1 Communications in Statistics Theory and Methods Estimation of a parametric function associated with the lognormal distribution Jiangtao Gou a,b and Ajit C. Tamhane c, a Department of Mathematics and Statistics,

More information

MAS3904/MAS8904 Stochastic Financial Modelling

MAS3904/MAS8904 Stochastic Financial Modelling MAS3904/MAS8904 Stochastic Financial Modelling Dr Andrew (Andy) Golightly a.golightly@ncl.ac.uk Semester 1, 2018/19 Administrative Arrangements Lectures on Tuesdays at 14:00 (PERCY G13) and Thursdays at

More information

Lecture 5: Fundamentals of Statistical Analysis and Distributions Derived from Normal Distributions

Lecture 5: Fundamentals of Statistical Analysis and Distributions Derived from Normal Distributions Lecture 5: Fundamentals of Statistical Analysis and Distributions Derived from Normal Distributions ELE 525: Random Processes in Information Systems Hisashi Kobayashi Department of Electrical Engineering

More information

Estimating the Greeks

Estimating the Greeks IEOR E4703: Monte-Carlo Simulation Columbia University Estimating the Greeks c 207 by Martin Haugh In these lecture notes we discuss the use of Monte-Carlo simulation for the estimation of sensitivities

More information

Importance Sampling for Estimating Risk Measures in Portfolio Credit Risk Models

Importance Sampling for Estimating Risk Measures in Portfolio Credit Risk Models Importance Sampling for Estimating Risk Measures in Portfolio Credit Risk Models Zhao Li October 2009 Abstract This paper is the report of a Master s Degree project carried out at Royal Institute of Technology

More information

Chapter 5. Sampling Distributions

Chapter 5. Sampling Distributions Lecture notes, Lang Wu, UBC 1 Chapter 5. Sampling Distributions 5.1. Introduction In statistical inference, we attempt to estimate an unknown population characteristic, such as the population mean, µ,

More information

Modelling the Sharpe ratio for investment strategies

Modelling the Sharpe ratio for investment strategies Modelling the Sharpe ratio for investment strategies Group 6 Sako Arts 0776148 Rik Coenders 0777004 Stefan Luijten 0783116 Ivo van Heck 0775551 Rik Hagelaars 0789883 Stephan van Driel 0858182 Ellen Cardinaels

More information

The Vasicek Distribution

The Vasicek Distribution The Vasicek Distribution Dirk Tasche Lloyds TSB Bank Corporate Markets Rating Systems dirk.tasche@gmx.net Bristol / London, August 2008 The opinions expressed in this presentation are those of the author

More information

IEOR E4703: Monte-Carlo Simulation

IEOR E4703: Monte-Carlo Simulation IEOR E4703: Monte-Carlo Simulation Generating Random Variables and Stochastic Processes Martin Haugh Department of Industrial Engineering and Operations Research Columbia University Email: martin.b.haugh@gmail.com

More information