Backtesting expected shortfall: A quantitative evaluation


DEGREE PROJECT IN MATHEMATICS, SECOND CYCLE, 30 CREDITS
STOCKHOLM, SWEDEN 2016

Backtesting expected shortfall: A quantitative evaluation

JOHAN ENGVALL

KTH ROYAL INSTITUTE OF TECHNOLOGY
SCHOOL OF ENGINEERING SCIENCES


Backtesting expected shortfall: A quantitative evaluation

JOHAN ENGVALL

Master's Thesis in Financial Mathematics (30 ECTS credits)
Master Programme in Mathematics (120 credits)
Royal Institute of Technology, year 2016
Supervisor at KTH: Boualem Djehiche
Examiner: Boualem Djehiche

TRITA-MAT-E 2016:57
ISRN-KTH/MAT/E--16/57--SE

Royal Institute of Technology
School of Engineering Sciences
KTH SCI
SE Stockholm, Sweden


Abstract

How to measure risk is an important question in finance and much work has been done on how to quantitatively measure risk. An important part of this measurement is evaluating the measurements against the outcomes, a procedure known as backtesting. A common risk measure is Expected shortfall, for which how to backtest has been debated. In this thesis we compare four different proposed backtests and see how they perform in a realistic setting. The main finding is that it is possible to find backtests that perform well, but it is important to investigate them thoroughly, as small errors in the model can lead to large errors in the outcome of the backtest.


Backtesting av expected shortfall: en kvantitativ studie

Sammanfattning (Summary)

How to measure risk is an important question in the financial industry and much has been written on how to quantify financial risk. An important part of measuring risk is checking afterwards that the models have given reasonable estimates of the risk; this procedure is usually called backtesting. A common risk measure is Expected shortfall, for which how this should be done has been debated. We present four different methods for doing this and see how they perform in a realistic situation. What we find is that it is possible to find methods that work well, but that it is important to test them carefully, since small errors in the methods can lead to large errors in the results.


Acknowledgements

I would like to thank my supervisor Boualem Djehiche for valuable input when writing this thesis.

Stockholm, December 2016
Johan Engvall


Contents

1 Introduction

2 Background
  2.1 Risk measures
  2.2 Forecast evaluation
  2.3 Likelihood ratio tests
  2.4 Stochastic differential equations
  2.5 Backtesting risk measures
  2.6 Backtesting Value-at-Risk
  2.7 Conclusion

3 Theoretical framework
  3.1 Backtesting expected shortfall
  3.2 Backtest 1
  3.3 Backtest 2
  3.4 Backtest 3
  3.5 Backtest 4
  3.6 Conclusion

4 Data
  4.1 Returns
  4.2 VaR
  4.3 ES
  4.4 Summary

5 Results
  5.1 Introduction to the results
  5.2 Geometric Brownian Motion
  5.3 Student's t
  5.4 Heston's stochastic volatility model
  5.5 Summary

6 Discussion
  6.1 Comparison between the methods
  6.2 Further research

References

Appendices

A Figures
  A.1 Student's t
  A.2 Heston's stochastic volatility model

Abbreviations

CDF   Cumulative distribution function
ES    Expected shortfall
GBM   Geometric Brownian motion
LR    Likelihood ratio
SDE   Stochastic differential equation
VaR   Value-at-Risk

Notation

S(x, y)      Scoring function
L_t          Loss at time t
X_t          Net value at time t
VaR_{t,α}    Value-at-Risk at time t with confidence level α
ES_{t,α}     Expected shortfall at time t with confidence level α
⌊x⌋          The integer part of x
F_X(u)       Cumulative distribution function of the stochastic variable X evaluated at u
Φ(x)         Cumulative distribution function of the standard normal distribution evaluated at x
t_ν(x)       Cumulative distribution function of the Student's t-distribution with ν degrees of freedom evaluated at x
S_t          The price of an asset at time t
W, W^1, W^2  Brownian motions


Chapter 1

Introduction

With the increased complexity of the financial markets, quantifying risk has become more important. Supervisors of the financial markets want to make sure that financial companies have adequate capital to cover the risks they are exposed to. Banks also want to keep track of their risks internally, to ensure that the risks of individual departments are not too high. This is important in order to avoid situations such as the one at Barings Bank, where large risks accumulated in a small regional office and led to the collapse of a large international bank. Quantitative risk measurement must of course be complemented by qualitative risk management methods.

Currently the most common measure of risk is Value-at-Risk (VaR), which is essentially a one-sided confidence interval of the distribution of the loss. It is defined for a given confidence level and time period. For example, given a one-day 5% VaR, the probability that the loss exceeds the VaR should be 5%, so a loss exceeding this estimate is an event you expect to occur roughly on a monthly basis. The main reason that VaR has become popular is that it is easy to work with and its meaning is easy to understand. There are however some drawbacks with VaR, leading to another risk measure being proposed as a replacement: Expected shortfall (ES). ES is more complex, but solves many of the problems associated with VaR. A move to ES as the primary risk measure has for example been proposed by the Basel Committee on Banking Supervision (cf. [4]).

An important part of estimating risk is to evaluate how accurate the estimates have been. This procedure is generally called backtesting. It is part of the larger field of forecast evaluation. The aim is to look at a forecast and the corresponding outcome and evaluate whether it was accurate or not. What is

an accurate estimation of course depends on the type of estimation. If you estimate a mean value, you want the distance between the estimate and the outcome to be as small as possible. If you estimate a median, you want half the outcomes to be larger and half to be smaller than the estimate. For VaR this procedure is quite straightforward, but for ES it is much more complex. An estimate whose accuracy cannot be measured even in retrospect is of course useless. This thesis aims to investigate how to evaluate methods of backtesting ES.

The main contribution of this thesis is its findings about the implementation of backtests. Many different backtests have been proposed, but the literature regarding actual implementation is scarce. In the implementation of a backtest for ES many assumptions and approximations have to be made, and this thesis shows that great care has to be taken when making these decisions.

The thesis first introduces four different backtests. These are to a large extent built on previously presented ideas for backtests, but the exact implementation is not clearly defined in all cases. In those cases we discuss different ways they could be implemented in practice. This is especially true for an approximation method proposed by Emmer, Kratz and Tasche (cf. [7]), where no actual method for the testing was proposed; in this case we present two different methods. The presented methods are then tested with simulated data, both with data for which the ES is assumed to be correctly modeled and with data where the ES is assumed to be modeled incorrectly. The aim is to see how the methods would perform in a realistic scenario.

The outline of the thesis is the following. Chapter 2 presents some general theory necessary for risk measures and the backtesting of them. In chapter 3 the backtests are presented and discussed. In chapter 4 it is presented how the data are simulated. In chapter 5 the results are presented, and in chapter 6 the main conclusions are presented as well as some suggestions for further studies.

Chapter 2

Background

2.1 Risk measures

General properties

To understand what VaR and ES are and why they are important, we must first think about what risk is and, more importantly, how to quantify it. Artzner et al. (cf. [3]) describe the following five properties that are desirable for a quantitative risk measure. A risk measure ρ(X) for the asset described by the random variable X should have the following three properties for any random variables X and Y:

Normalization: ρ(0) = 0. Holding no position means you have zero risk.

Translation invariance: ρ(X + a(1 + r_f)) = ρ(X) − a for all a ∈ R. Adding a fixed amount of cash to a position decreases the risk by an equal amount. This is important for the interpretation of a risk measure as an amount of capital needed to stay solvent.

Monotonicity: If X ≤ Y then ρ(X) ≥ ρ(Y). A portfolio with a lower value will have a higher risk.

A coherent risk measure also has the following two properties:

Subadditivity: ρ(X + Y) ≤ ρ(X) + ρ(Y). Diversification can never increase the risk, only decrease it.

Positive homogeneity: ρ(aX) = aρ(X) for all a ≥ 0. The risk increases proportionally to the size of the position: doubling the position doubles the risk.

That a risk measure is coherent is not necessary for its ability to be used quantitatively to measure risk, but it gives some important results, mainly that the effect of diversification cannot be negative, as it can be for a risk measure that is not coherent.

Value-at-Risk

Value-at-Risk (VaR) is defined as a one-sided confidence interval of the loss. The VaR is given by a confidence level and a time period. An n-day α% VaR is given by

    VaR_{t,α}(X) = inf{x : P(X_t < −x) ≤ α},

where X_t is the net value of the asset after n days, defined by

    X_t = S_t − S_{t−1}.

From this the loss can be defined by

    L_t = −X_t.

The VaR satisfies the three properties of a risk measure, but it does not satisfy the property of subadditivity and is therefore not a coherent risk measure. This means that diversification can affect the VaR negatively, although in practice it is clear that the risk would decrease. It also does not take into account losses larger than the VaR level: for example, the 5% VaR is not affected by the losses less probable than 5%. This means that there could be large but improbable losses which are not accounted for. From a theoretical point of view this is a large drawback of VaR, but regardless

it is widely used in practice. If VaR is calculated for longer time periods one should account for the interest rate, but as the methods here are only relevant for VaR calculated over short time periods, this is disregarded.

Expected shortfall

The Expected shortfall (ES) is defined as (cf. [1])

    ES_α(X) = (1/α) ∫_0^α VaR_u(X) du,                    (2.1)

which, if X has a continuous distribution, can also be written as the conditional expectation

    ES_α(X) = E[L | VaR_α(X) ≤ L].                        (2.2)

The main advantage of ES over VaR is that ES satisfies the property of subadditivity and therefore is a coherent risk measure.

Estimation of VaR and ES

Most methods for estimating risk measures rely on historical data. There are two main approaches: parametric and non-parametric methods. In a parametric method you fit a given probability distribution to the data; common choices are the normal distribution and Student's t-distribution. In a non-parametric method you do not assume anything about the distribution of the data but instead resample from the historical data. The reliance on historical data is a problem because the asset exposed to the risk is often not static, meaning that older data often become irrelevant. But often there is no real alternative, as the distribution of the loss has to be estimated somehow.

2.2 Forecast evaluation

An important topic, which is the foundation of backtesting, is forecast evaluation. Much has been written about how to evaluate forecasts. Forecasts can take different forms and are evaluated differently. Two important examples of types of forecasts are interval forecasts and point forecasts. Point forecasts forecast a point with some type of property, for example the mean or the median. An interval forecast is a forecast of an

interval in which the outcome will lie with some probability. The interval can be both one-sided and two-sided. The median can therefore be viewed as both a point forecast and an interval forecast, and VaR can likewise be regarded as both a one-sided interval forecast and a point forecast.

For a point forecast you have a forecast x and an outcome y, and you want to assess whether x is a good forecast for y. To do that you generally use a scoring function S(x, y), defined as any mapping

    S : O × A → [0, ∞),

where O is the observation domain consisting of all possible outcomes and A is the forecast domain consisting of all possible forecasts. In the more general theory x is often seen as an action and y as an outcome; the forecast domain is therefore often called the action domain, hence the use of A. When dealing with forecasts, O and A are often assumed to be an identical prediction-observation domain D, which is the same as assuming that it is possible to forecast all possible outcomes. If the prediction-observation domain is an interval of real numbers, D = I × I for an interval I ⊆ R, the following conditions must hold (cf. [9]):

    S(x, y) > 0 for x ≠ y
    S(x, y) = 0 if x = y
    S(x, y) is continuous in x
    The partial derivative ∂S(x, y)/∂x exists and is continuous in x whenever x ≠ y

The scoring function can be seen as a metric between the forecast and the outcome, and therefore as a measure of the error between the prediction and the outcome. Thus a lower score means a more accurate forecast. There is, however, no condition that requires symmetry of the scoring function: to forecast x when the outcome is y can be considerably more wrong than forecasting y when the outcome is x. But the requirement of continuity ensures that for similar estimates the error must be small. Common examples of scoring functions are the squared error and the absolute error

    S(x, y) = (x − y)²,
    S(x, y) = |x − y|.
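As an aside not taken from the thesis, a minimal Python sketch illustrates these scoring functions: minimizing the average squared error over a grid of candidate forecasts recovers the sample mean, and minimizing the average absolute error recovers the median. The data and grid below are illustrative assumptions.

```python
def squared_error(x, y):
    # S(x, y) = (x - y)^2
    return (x - y) ** 2

def absolute_error(x, y):
    # S(x, y) = |x - y|
    return abs(x - y)

def empirical_minimizer(score, outcomes, candidates):
    """Return the candidate forecast x minimizing the average score
    against the observed outcomes y."""
    return min(candidates,
               key=lambda x: sum(score(x, y) for y in outcomes) / len(outcomes))

outcomes = [1.0, 2.0, 2.0, 3.0, 7.0]
grid = [i / 100 for i in range(0, 801)]  # candidate forecasts 0.00 .. 8.00

best_sq = empirical_minimizer(squared_error, outcomes, grid)    # the mean, 3.0
best_abs = empirical_minimizer(absolute_error, outcomes, grid)  # the median, 2.0
```

This is exactly the phenomenon formalized as elicitability: each functional (mean, median) is the minimizer of the expected value of its own scoring function.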

Elicitability

An important property in the area of point forecast evaluation is elicitability, a concept originally introduced by Osband (cf. [14]) and formalized by Lambert, Pennock and Shoham (cf. [12]). A functional φ(Y) of a random variable Y is elicitable if

    φ(Y) = argmin_x E[S(x, Y)]

for some scoring function S. This means that the functional can be seen as the minimization of a given scoring function. An example of this is the least squares method, where the mean is given by the minimization of the scoring function S(x, y) = (x − y)².

Gneiting (cf. [9]) shows that VaR is elicitable, and he also shows that ES is not elicitable. The scoring function that elicits VaR is

    S(x, y) = (1{y ≤ x} − α)(x − y).                      (2.3)

We see that for a low α the scoring function will be considerably higher if x > y than the other way around. ES is not elicitable, but Emmer, Kratz and Tasche (cf. [7]) show that ES is conditionally elicitable. The conditional elicitability of ES comes from the fact that VaR is an elicitable functional and that ES, defined conditionally on the VaR as in (2.2), amounts to taking a subset of the outcomes. For this subset the ES is just the expected value, which is elicitable.

Hypothesis testing

A statistical hypothesis test is performed by taking two hypotheses: a null hypothesis which is assumed to be true and an alternative hypothesis which one aims to test against. For a given set of observations the probability that they would be obtained under the null hypothesis is calculated, giving a p-value. A low p-value means an outcome that is unlikely under the assumed null model. Since the p-value measures how unlikely the outcome is when the null hypothesis is true, for a true null hypothesis with continuous observations the p-values are expected to follow the uniform distribution

    p ∼ U(0, 1).
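Connecting back to the estimation methods of section 2.1, a non-parametric (historical simulation) estimate of VaR and ES can be sketched in a few lines of Python. The function names and the simulated standard-normal losses below are illustrative assumptions, not from the thesis.

```python
import random

def empirical_var(losses, alpha):
    """Empirical VaR: smallest observed loss level such that the
    fraction of strictly larger losses is at most alpha."""
    ordered = sorted(losses, reverse=True)
    return ordered[int(len(ordered) * alpha)]

def empirical_es(losses, alpha):
    """Empirical ES: the average of the alpha-fraction largest losses."""
    ordered = sorted(losses, reverse=True)
    k = max(1, int(len(ordered) * alpha))
    return sum(ordered[:k]) / k

rng = random.Random(0)
losses = [rng.gauss(0.0, 1.0) for _ in range(200_000)]
var_5 = empirical_var(losses, 0.05)  # true value for N(0,1) losses: ~1.645
es_5 = empirical_es(losses, 0.05)    # true value: phi(1.645)/0.05 ~ 2.063
```

Note that the ES estimate is always at least as large as the VaR estimate at the same level, since it averages losses beyond the VaR.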

In the case that the observations are discrete the possible p-values are finite and not necessarily uniformly distributed, but with an increasing number of possible outcomes the distribution converges towards the uniform distribution, conditional on the null hypothesis being correct.

2.3 Likelihood ratio tests

A common method for comparing the goodness of fit of two models is the likelihood ratio (LR) test. It compares a null model against an alternative model, where the null model is a special case of the alternative model. The test then either rejects or does not reject the null model; if the null model is rejected, it means that the data are unlikely to come from that model. Wilks (cf. [15]) shows that minus two times the logarithm of the likelihood ratio is asymptotically χ²-distributed. The Neyman-Pearson lemma (cf. [13]) states that the likelihood ratio test is the most powerful test of the goodness of fit for these models.

2.4 Stochastic differential equations

A stochastic process can be described by a stochastic differential equation (SDE), which is written on the form

    dS_t = µ(S_t, t)dt + σ(S_t, t)dW_t.

Geometric Brownian motion

An important example is the geometric Brownian motion (GBM),

    dS_t = µS_t dt + σS_t dW_t.

This SDE has an explicit solution,

    S_t = S_0 exp((µ − σ²/2)t + σW_t).                    (2.4)

Heston's stochastic volatility model

Another important example is the Heston stochastic volatility model (cf. [10]), where the volatility is a stochastic process. The SDE for the asset is

    dS_t = µS_t dt + √ν_t S_t dW_t^1,                     (2.5)

with the following SDE for the volatility and covariation of the processes:

    dν_t = κ(θ − ν_t)dt + ξ√ν_t dW_t^2,
    dW_t^1 dW_t^2 = ρdt.

This SDE has no explicit solution, so a numerical solution has to be simulated. The process for the volatility is a Cox-Ingersoll-Ross process, which stays larger than zero if the following condition is satisfied (cf. [6]):

    2κθ ≥ ξ².                                             (2.6)

Discretization

The stochastic process is approximated with a discretization by the Euler-Maruyama method, which approximates a solution of the SDE

    dX_t = a(X_t)dt + b(X_t)dW_t

recursively by

    X_{n+1} = X_n + a(X_n)Δt + b(X_n)ΔW_n,

where the ΔW_n are normally distributed and independent. With Δt = 1, the volatility process is discretized by

    ν_{n+1} = κθ + (1 − κ)ν_n + ξ√ν_n ΔW_n^2,             (2.7)

and the SDE for the asset by

    S_{n+1} = S_n exp((µ − ν_n/2) + √ν_n ΔW_n^1),         (2.8)

where the correlation between ΔW_n^1 and ΔW_n^2 is ρ. If condition (2.6) is satisfied, ν_n should theoretically be positive; however, because of discretization errors it could become negative, which would be problematic since (2.7) and (2.8) use the square root of ν_n. Because of this, ν_{n+1} is taken as

    ν_{n+1} = max(0, κθ + (1 − κ)ν_n + ξ√ν_n ΔW_n^2),

so for negative values the volatility is just taken as zero instead. This should not be a major problem if (2.6) is satisfied, as the values in theory should be positive.
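A sketch in Python of one Euler-Maruyama step of the Heston scheme above, with the variance truncated at zero as described. The parameter values, the use of a general step size Δt (the text above fixes Δt = 1), and the way the correlated increments are generated are illustrative assumptions rather than the thesis's exact setup.

```python
import math
import random

def heston_step(s, v, mu, kappa, theta, xi, rho, dt, rng):
    """One Euler-Maruyama step of the Heston model. The variance is
    truncated at max(0, .) so the square roots stay real."""
    z_v = rng.gauss(0.0, 1.0)                               # drives the variance
    z_s = rho * z_v + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
    sq = math.sqrt(dt)
    v_next = max(0.0, v + kappa * (theta - v) * dt + xi * math.sqrt(v) * sq * z_v)
    s_next = s * math.exp((mu - 0.5 * v) * dt + math.sqrt(v) * sq * z_s)
    return s_next, v_next

# Illustrative parameters chosen to satisfy condition (2.6): 2*kappa*theta >= xi^2
kappa, theta, xi = 1.5, 0.04, 0.3
assert 2 * kappa * theta >= xi ** 2

rng = random.Random(1)
s, v = 100.0, theta
for _ in range(252):  # one year of daily steps
    s, v = heston_step(s, v, 0.05, kappa, theta, xi, -0.7, 1.0 / 252, rng)
```

Setting ξ = 0 and ν_0 = θ reduces the scheme to the GBM discretization, which is one way to sanity-check an implementation.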

2.5 Backtesting risk measures

The process of evaluating the performance of a risk measure is often called backtesting. It is forecast evaluation applied with the risk measure as the forecast and the realised loss as the outcome. There are two reasons why one may want to do this. Either you want to assess the accuracy of the model, in which case you generally use two-sided tests, or you want to make sure that the risk estimates are not too low, in which case you generally use a one-sided test. The latter is common for regulatory purposes, where the interest is in the estimated risk not being too low, while an estimate that is too high matters less. The former is more interesting if you want to assess how accurate the modelling of the risk measure is. For financial institutions it is of course also important that the risk is not overestimated.

2.6 Backtesting Value-at-Risk

The most common way to backtest VaR is to evaluate it as an interval forecast instead of a point forecast. This is generally done by defining the sequence

    I_t = 0 if L_t ≤ VaR_t,
          1 if L_t > VaR_t.                               (2.9)

If the VaR model is correct, the I_t should be independent and identically distributed stochastic variables with a Bernoulli distribution,

    I_t ∼ Be(α) for all t.

This gives two important properties of the sequence that should be fulfilled: firstly, that the number of exceedances of the VaR is correct and, secondly, that the exceedances are independent of each other. The first property is often called unconditional coverage, the second is called independence, and the combined property is called conditional coverage. In general, methods, especially earlier ones, have focused on the first property, but the second property is also important. In a more general sense the first property assures that the number of large losses is not too high, whilst the second property assures that large losses do not occur too clustered together.
If losses occur clustered, the aggregate loss over a period could become high; for example, with large losses every day for a week, the aggregate loss for the entire week could be unacceptably high. A dependence could also occur at different orders. A first-order dependence would imply that the probability of an exceedance at time t depends on whether there was an exceedance

at time t − 1. A second-order dependence would imply that the probability of an exceedance at time t depends on whether there were exceedances at times t − 1 and t − 2. Generally, lower-order dependence is more interesting to account for.

As VaR is elicitable it should also be possible to evaluate it as a point forecast, but this is not commonly done. Note that the scoring function in (2.3) looks quite similar to the indicator function in (2.9). The indicator function would however not constitute a valid scoring function, as it is not continuous at L_t = VaR_t and is zero at points other than L_t = VaR_t.

Using a binomial model

An early but often used backtest framework was proposed by Kupiec (cf. [11]), which relies on the fact that the sum of Bernoulli distributed variables is binomially distributed. Forming the sum

    Y = Σ_{t=1}^N I_t,

for the model to be correct Y should come from the distribution

    Y ∼ Bin(N, α).

This method accounts only for unconditional coverage, so any interactions between the exceedances are not accounted for. However, the method currently used for regulatory purposes is based on this method.

Using a Markov chain model

Christoffersen (cf. [5]) proposes that the first-order interactions could be modeled as a first-order Markov chain with the following transition matrix, assuming that the model is correct:

    Π = [ 1 − α   α ]
        [ 1 − α   α ].

He also designs LR tests for unconditional coverage, independence and conditional coverage. The LR test for unconditional coverage tests

    H_0 : E[I_t] = α

against

    H_1 : E[I_t] ≠ α,

giving the likelihood under the null hypothesis

    L(α; I_1, I_2, ..., I_T) = (1 − α)^{n_0} α^{n_1},

and the likelihood under the alternative

    L(π; I_1, I_2, ..., I_T) = (1 − π)^{n_0} π^{n_1}.

The LR test for unconditional coverage can then be formulated as

    LR_uc = −2 log[L(α; I_1, I_2, ..., I_T)/L(π̂; I_1, I_2, ..., I_T)] ~asy χ²(1),

where π̂ = n_1/(n_0 + n_1) is the maximum likelihood estimate of π.

The LR test of independence uses the Markov chain approximation, with the transition probability matrix

    Π_1 = [ 1 − π_01   π_01 ]
          [ 1 − π_11   π_11 ],

where π_ij = P(I_t = j | I_{t−1} = i). The approximate likelihood function for the process is

    L(Π_1; I_1, I_2, ..., I_T) = (1 − π_01)^{n_00} π_01^{n_01} (1 − π_11)^{n_10} π_11^{n_11}.

The maximization of the log-likelihood function gives

    Π̂_1 = [ n_00/(n_00 + n_01)   n_01/(n_00 + n_01) ]
          [ n_10/(n_10 + n_11)   n_11/(n_10 + n_11) ].

Independence corresponds to

    Π_2 = [ 1 − π_2   π_2 ]
          [ 1 − π_2   π_2 ],

with the likelihood under the null hypothesis

    L(Π_2; I_1, I_2, ..., I_T) = (1 − π_2)^{n_00 + n_10} π_2^{n_01 + n_11}.

The LR test of independence follows the distribution

    LR_ind = −2 log[L(Π̂_2; I_1, I_2, ..., I_T)/L(Π̂_1; I_1, I_2, ..., I_T)] ~asy χ²(1).

The LR tests for unconditional coverage and independence can be combined into a joint test for conditional coverage by

    LR_cc = LR_uc + LR_ind,

following the distribution

    LR_cc ~asy χ²(2).                                     (2.10)

This method is more complex than the method proposed by Kupiec, but it also accounts for more possible misspecifications in the risk model. Interactions are something you would expect to see in models that do not fully account for stochastic volatility: if you assume a constant volatility you will get more exceedances when the volatility is high and fewer when the volatility is low. This makes the independence property interesting to account for, as it is probable that many risk models do not fully account for stochastic volatility. Testing only the unconditional coverage should give the same result as the binomial model, and Kupiec derives the same LR test as Christoffersen's unconditional coverage test.

One drawback of this method is that it cannot be formulated as a one-sided test. Often you are only interested in whether the risk model underestimates the risk, not whether it overestimates it. The Basel Committee's traffic light test is an example of a one-sided test based on the binomial model. A model accounting for higher-order interactions could be constructed, but it would be even more complex and therefore not likely to be reasonable to implement. The effect of these interactions would probably also be less important than the first-order interactions, as it is reasonable to assume that checking for first-order interactions should cover most instances of clustering.

2.7 Conclusion

To quantify risk is crucial in the financial industry. The most widely used measure is Value-at-Risk, followed by Expected shortfall. The main drawbacks of VaR are that it lacks a property called subadditivity and that it does not capture tail risk, which makes ES a better alternative from a theoretical viewpoint.
An important procedure when measuring risk is to backtest your risk measures, that is, to compare the estimates to the outcomes in order to investigate how accurate the estimates were. The main advantage of VaR is that it is easy to estimate and to backtest, while for ES there has been no consensus regarding how to backtest it.

When backtesting there are two important properties to consider: unconditional coverage and independence. Unconditional coverage means that the probability of an exceedance is correct. Independence means that the probability of an exceedance does not depend on earlier exceedances. Generally backtests have focused on unconditional coverage, but independence is also an important property. In financial data you often see volatility clustering; therefore, if the risk model does not account for the stochastic volatility you would expect to see a clustering of the VaR-exceedances.
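To make the VaR backtests of this chapter concrete, here is a sketch in Python of the three LR statistics (LR_uc coincides with Kupiec's unconditional coverage test). The clustered toy exceedance sequence is an illustrative assumption: its coverage is exactly 5%, yet the independence test detects the clustering.

```python
import math

def bern_loglik(p, k0, k1):
    # log[(1 - p)^k0 * p^k1], with the 0 * log(0) terms dropped
    out = 0.0
    if k0: out += k0 * math.log(1.0 - p)
    if k1: out += k1 * math.log(p)
    return out

def christoffersen(I, alpha):
    """LR_uc, LR_ind and LR_cc for a 0/1 exceedance sequence I.
    Compare against chi-square critical values with 1, 1 and 2 df."""
    n1 = sum(I); n0 = len(I) - n1
    pi_hat = n1 / len(I)
    lr_uc = -2.0 * (bern_loglik(alpha, n0, n1) - bern_loglik(pi_hat, n0, n1))
    # transition counts for the first-order Markov chain
    c = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 0}
    for pair in zip(I, I[1:]):
        c[pair] += 1
    n00, n01, n10, n11 = c[0, 0], c[0, 1], c[1, 0], c[1, 1]
    pi01 = n01 / (n00 + n01) if n00 + n01 else 0.0
    pi11 = n11 / (n10 + n11) if n10 + n11 else 0.0
    pi2 = (n01 + n11) / (n00 + n01 + n10 + n11)
    lr_ind = -2.0 * (bern_loglik(pi2, n00 + n10, n01 + n11)
                     - bern_loglik(pi01, n00, n01) - bern_loglik(pi11, n10, n11))
    return lr_uc, lr_ind, lr_uc + lr_ind

# Correct coverage (exactly 5% exceedances) but heavily clustered:
I = [1] * 500 + [0] * 9500
lr_uc, lr_ind, lr_cc = christoffersen(I, 0.05)
# lr_uc is ~0 (coverage is exact), while lr_ind is far above the
# chi-square(1) 95% critical value 3.841: the clustering is detected.
```

This illustrates why a test based on the number of exceedances alone, like the binomial model, cannot see this kind of misspecification.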

Chapter 3

Theoretical framework

3.1 Backtesting expected shortfall

As pointed out in Acerbi and Szekely (cf. [2]), the discovery by Gneiting (cf. [9]) in 2011 that ES is not elicitable sparked a somewhat confused debate concerning whether or not ES could be backtested at all. While this makes it impossible to backtest ES directly as a point forecast, it is possible to evaluate it in other ways. As pointed out earlier, the primary way to backtest VaR is to evaluate it as an interval forecast, not a point forecast. As ES is not a pure interval, this is not directly applicable; the backtest therefore has to be designed using some other forecast evaluation technique.

3.2 Backtest 1

When the ES is defined as an integral of VaR as in equation (2.1), the integral can be approximated as a sum over different VaR-levels, giving

    ES_α(X) = (1/α) ∫_0^α VaR_u(X) du ≈ (1/N) Σ_{k=1}^N VaR_{kα/N}(X).

Emmer, Kratz and Tasche (cf. [7]) suggest using the approximation

    ES_α ≈ (1/4)(VaR_α(X) + VaR_{0.75α}(X) + VaR_{0.5α}(X) + VaR_{0.25α}(X)).

However, why this particular approximation should be used is not specified. The approximation corresponds to a left-point rectangular approximation of

the integral. We use the approximation

    ES_α ≈ (1/5)(VaR_α(X) + VaR_{0.8α}(X) + VaR_{0.6α}(X) + VaR_{0.4α}(X) + VaR_{0.2α}(X)).    (3.1)

This approximation gives more even VaR-levels for the ES levels used, which could be beneficial when the VaR is modeled with historical simulation. These levels should be tested jointly. Applying the Markov chain model by Christoffersen, this would be modeled as a 6-state Markov chain. The problem with this approach is that the transition probabilities between two different levels of VaR-exceedances would be very low, requiring a huge amount of data to test accurately, so that method is not considered in this thesis. The solution would be to test the levels independently or to test only the unconditional coverage jointly.

A problem with testing the levels independently is that they are clearly not independent, and it would be hard to account for this in the testing. It would be hard to determine what accuracy you have in your test, and it would therefore be difficult to justify a high enough significance because of the high risk of type I errors. There are methods for combining multiple tests, but they often assume independence between the tests, which is unrealistic in this scenario. The correlation is probably higher for VaR-levels that are closer to each other; for example, the 0.5% VaR and the 1% VaR are probably more correlated than the 1% VaR and the 5% VaR. If the tests are assumed to be independent, Fisher's method (cf. [8]) could be used:

    χ²_{2k} = −2 Σ_{i=1}^k ln(p_i).

On the other hand, taking the minimum p-value as the combined p-value would be reasonable if the values are highly correlated. The correlation matrix for the VaR-exceedances of a correct VaR model with levels 2.5%, 2%, 1.5%, 1% and 0.5% shows large correlations between the levels.

So it is reasonable to assume that they will be highly correlated. The backtest used in this thesis therefore takes the p-value of the combined test as the minimum of the individual p-values, where the individual p-values are calculated with Christoffersen's method described in (2.10). As the tests are based on the discrete sequence (2.9), the possible p-values are finite, but the number of possible outcomes is large enough that they can be considered approximately continuous.

3.3 Backtest 2

Testing the unconditional coverage jointly with the approximation method means that instead of testing against a binomial distribution, as in the method by Kupiec, you test against a multinomial distribution. With the given approximation you divide the losses according to

    I_t = 0   if L_t ≤ VaR_{t,α},
          k   if VaR_{t,(n−k+1)α/n} < L_t ≤ VaR_{t,(n−k)α/n},  for k = 1, ..., n − 1,
          n   if VaR_{t,α/n} < L_t.

This is tested for the hypothesis that the probability distribution of I_t is

    P(I_t = 0) = 1 − α,  P(I_t = 1) = P(I_t = 2) = ... = P(I_t = n) = α/n.    (3.2)

This is done with a Pearson chi-squared test. Let X_m be the number of times I_t = m, T the total number of observations and P_i the probability of event i. The test statistic is then given by

    D = Σ_{i=0}^n (X_i − T P_i)²/(T P_i),

which approximately follows a chi-square distribution with n degrees of freedom,

    D ~ χ²_n.

The cumulative distribution function of the χ²-distribution gives the p-value of the test. A rejection of the hypothesis means that the ES model can be assumed to be flawed with that confidence; for example, if the p-value is less than 5% we can say with 95% certainty that the ES model is flawed. A consideration has to be made regarding the number of VaR-levels tested.
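The Pearson chi-squared test above can be sketched in a few lines of Python. The bucket counts below are hypothetical, chosen only to illustrate a model that is not rejected; the critical value 11.070 is the 95% quantile of the chi-square distribution with 5 degrees of freedom.

```python
def pearson_chi2_stat(counts, probs):
    """Pearson statistic D = sum_i (X_i - T*P_i)^2 / (T*P_i),
    approximately chi-square with n degrees of freedom (n + 1 cells)."""
    T = sum(counts)
    return sum((x - T * p) ** 2 / (T * p) for x, p in zip(counts, probs))

# alpha = 5% split over n = 5 VaR-levels: P(I_t = 0) = 0.95, the rest 0.01 each
probs = [0.95] + [0.01] * 5
counts = [9520, 96, 104, 98, 91, 91]  # hypothetical bucket counts, T = 10000

D = pearson_chi2_stat(counts, probs)
# Reject at the 95% level if D exceeds the chi-square(5) critical value 11.070
reject = D > 11.070
```

Note the trade-off discussed in the text: more VaR-levels means more cells with small expected counts T·α/n, which weakens the chi-square approximation and the power of the test.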

Increasing the number of VaR-levels makes the approximation in (3.1) more accurate but decreases the power of the test of (3.2). One drawback with this method is that it is impossible to design a one-sided test. From a regulatory perspective this makes the test less interesting, but it could still be interesting from an internal model validation perspective. Another drawback is that this method cannot account for conditional coverage; it only accounts for unconditional coverage, in the same manner as the Kupiec test does for the backtest of VaR. As for backtest 1, the sequence (3.3) on which the test is based is discrete. Also in this case the number of possible outcomes is considered large enough for the resulting p-values to be treated as approximately continuous.

3.4 Backtest 3

As pointed out by Acerbi and Szekely (cf. [2]), one way to design the backtest is to use the conditional elicitability of ES. In this method the VaR is first backtested, and then the magnitude of the losses, in the cases where the loss is larger than the VaR, is compared to the ES. Using the conditional probability form of ES in (2.2), it can be rewritten as

    E[ES_{α,t} − L_t | VaR_{α,t} − L_t < 0] = 0.

For t such that VaR_{α,t} − L_t < 0, the hypothesis

    H_0 : E[ES_{α,t} − L_t] = 0

is tested against

    H_1 : E[ES_{α,t} − L_t] < 0.

This hypothesis is tested with a paired difference test, based on the differences between the ES estimates and the losses in the periods with VaR-exceedances. For the VaR-exceedances you form

    D = ES_{α,t} − L_t,

which is assumed to follow a Student's t-distribution. The distribution of D may be skewed, because it is bounded above by ES_{α,t} − VaR_{α,t} but not

bounded below. This would violate the Student's t assumption on the distribution of D_t; if this is a problem, the results will show it. As the test in this method also relies on the underlying VaR model, you should evaluate that model as well and aggregate its p-value with the p-value of the test for the magnitude of the ES. In this thesis, however, this is not done: the test makes the assumption that the VaR model is correct, in order to see how the magnitude test performs on its own, without the VaR test that would probably perform better.

3.5 Backtest 4

A fourth method is to look at the difference between the estimated ES and the loss at each time point. This is similar to the third method, but instead of calculating the interval to determine which losses are among the α% largest, the α% largest losses are used directly. It is important to remember that each loss is the outcome of a different, unknown probability distribution, which makes comparing which losses are the most unlikely less straightforward. Taking the α% largest of

    D_t = ES_t − L_t,

is, however, a reasonable approximation. This will have a higher probability of rejecting the ES for being too small than for being too large. The main problem arises when the ES varies in size, which it can reasonably be expected to do because volatility is stochastic rather than constant. The differences are then tested in the same way as in backtest 3. An advantage of this method is that it does not require the underlying VaR model to be tested.

3.6 Conclusion

Some authors claim that the lack of elicitability is a significant shortcoming of ES as a risk measure. However, this is not true. The main drawback compared to VaR is that VaR is a pure interval forecast, which is not that difficult to backtest. ES is a more complicated forecast and there is not much literature on the evaluation of similar forecasts.
It is more difficult to backtest, but that is not connected to its lack of elicitability. As ES is not directly backtestable, some other approach has to be taken. In this thesis two different types of methods are investigated: one which relies on testing the magnitude of the losses larger than the cut-off level, and

another which relies on an approximation of the ES by different levels of VaR. In the second method one also has to decide how the tests at the different levels should be aggregated into a single test. When choosing how to evaluate the accuracy of ES it is important to first decide what you want to achieve with the evaluation: do you want to make sure that the ES estimate is not too low, or do you want to make sure that it is close to the actual ES? Generally, the methods that rely on the magnitude of the ES require much more data. This is because for ES at level α you can only use α% of your sample to evaluate the magnitude of the ES. A commonly proposed level for ES is 2.5%; this means that for every 200 observations you would be able to use 5 to evaluate the magnitude. So if you only have a few hundred data points, the statistical power to reject an incorrect model will be low. This drawback means that these types of models are unlikely to be implemented in practice. Also, if a suitable backtest for ES is found, it will probably be much weaker than the backtest for VaR given the same data, meaning that backtests for ES will be far less capable of separating a good ES model from a bad one than a VaR backtest is. You would therefore get an increase in either the false positives or the false negatives. This could potentially be a big problem if ES were to be used as the primary risk measure.
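The magnitude-based backtests 3 and 4 both reduce to a one-sided t-test on the differences ES − L. A minimal sketch, assuming losses, VaR and ES estimates are given as plain lists (the function name and example values are illustrative, not from the thesis):

```python
import math

def magnitude_t_statistic(es, losses, var=None):
    """One-sided t-statistic for H0: E[ES - L] = 0 against H1: E[ES - L] < 0.

    If var is given, only days with a VaR-exceedance (loss > VaR) enter the
    test (backtest 3); otherwise all supplied pairs are used (backtest 4,
    after the caller has selected the alpha% largest losses). Reject H0 when
    the statistic falls below the lower Student-t quantile with k - 1 d.o.f.
    """
    if var is not None:
        diffs = [e - l for e, l, v in zip(es, losses, var) if l > v]
    else:
        diffs = [e - l for e, l in zip(es, losses)]
    k = len(diffs)
    if k < 2:
        raise ValueError("need at least two exceedances to form the test")
    mean = sum(diffs) / k
    s2 = sum((d - mean) ** 2 for d in diffs) / (k - 1)   # sample variance
    return mean / math.sqrt(s2 / k)

# Four exceedance days with differences -1, -1, -1, -2 give t = -5.0,
# far below any usual one-sided critical value with 3 degrees of freedom.
t = magnitude_t_statistic([2.0, 2.0, 2.0, 2.0], [3.0, 3.0, 3.0, 4.0],
                          var=[1.0, 1.0, 1.0, 1.0])
```

The small number of exceedance days in the example also illustrates the data-hunger discussed above: with α = 2.5% and a few hundred observations, k stays in the single digits.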

Chapter 4

Data

Simulated data are used to test the methods. The reason for this is to be able to evaluate the performance of the backtests independently of the performance of the methods estimating ES and VaR. With real data you do not know whether the ES models are correct, and therefore you cannot know which models should be rejected: for a given rejected ES model you do not know if it is a correct rejection of an incorrect model or an incorrect rejection of a correct one. Even with simulated data you do not know which individual models are correct and incorrect; you only know the probability that a given model is incorrect.

4.1 Returns

Asset paths

The data that form the basis for the estimation of risk are daily asset prices, denoted S_t for the asset price at time t. Three different types of models are used to simulate the asset prices: a GBM model, which corresponds to log-normally distributed asset returns; a stochastic process where the normal distribution is replaced by a Student's t-distribution; and a stochastic volatility model. For the GBM the explicit solution given in equation (2.4) is used. This solution is discretized to generate data corresponding to discrete times,

    S_{n+1} = S_n exp((μ − σ²/2)Δt + σ ΔW_n),

where the increments ΔW_n are standard normally distributed and Δt = 1. Calculating ES with a parametric normal method should give a correct answer for this model. For the simulated data the parameters are

    μ = 10⁻⁵,    σ = 10⁻⁴.

The simulated data for N = 1000 are plotted in figure 4.1.

Figure 4.1: 500 discretized GBM paths with 1000 time steps

For the second model the same discretization as for the GBM is used, but instead of assuming normally distributed increments, the increments are assumed to follow a Student's t-distribution. The resulting discrete stochastic process does not correspond to a valid continuous process, as the sum of two Student's t-distributed random variables is not Student's t-distributed; since only the discrete case is of interest here, this poses no problem. As the Student's t-distribution has a fatter tail than the normal distribution, a parametric model that assumes a normal distribution should underestimate the ES when used on these data. To get a good comparison with the GBM data, the same expected value and standard deviation are used for the Student's t-distribution, and the degrees of freedom are set to 4, giving the parameters

    μ = 10⁻⁵,    ν = 4,    scale σ√((ν − 2)/ν) with σ = 10⁻⁴,

so that the standard deviation of the increments matches that of the GBM.
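A path simulator along these lines might look as follows. This is a sketch under the stated parameter choices (the function name is illustrative); the Student's t increments are drawn via the normal/chi-squared representation and rescaled to unit standard deviation, matching the normal case.

```python
import math, random

def simulate_paths(n_steps, mu=1e-5, sigma=1e-4, nu=None, s0=1.0, seed=0):
    """Discretized GBM path S_{n+1} = S_n * exp((mu - sigma^2/2) + sigma*dW_n).

    With nu=None the increments dW_n are N(0, 1); with nu set (e.g. nu=4)
    they are Student-t distributed, rescaled so their standard deviation
    is still 1 (the thesis's fat-tailed variant).
    """
    rng = random.Random(seed)

    def t_increment():
        # t_nu = Z / sqrt(chi2_nu / nu), then scaled to unit variance
        z = rng.gauss(0.0, 1.0)
        chi2 = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(nu))
        return (z / math.sqrt(chi2 / nu)) * math.sqrt((nu - 2) / nu)

    path = [s0]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, 1.0) if nu is None else t_increment()
        path.append(path[-1] * math.exp(mu - sigma ** 2 / 2 + sigma * dw))
    return path

p = simulate_paths(1000)            # one GBM path, 1000 time steps
q = simulate_paths(1000, nu=4)      # the heavy-tailed Student-t variant
```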

The simulated data for N = 1000 are plotted in figure 4.2.

Figure 4.2: 500 Student's t paths with 1000 time steps

For the stochastic volatility model the asset prices are simulated with the Heston model, where the asset prices follow the SDE given in equation (2.5). Asset prices following this process should show a stronger clustering of large returns. There is also a negative correlation between the volatility process and the asset process, meaning that high volatility and negative returns are more likely to occur at the same time. The parameters are set to

    κ = 0.02,  θ = 10⁻⁸,  ξ = 10⁻⁵,  ρ = −0.5,  S₀ = 1,  ν₀ = 10⁻⁸,  μ = 10⁻⁵.

Looking at the condition in (2.6), which ensures that the volatility stays positive, we have

    2κθ/ξ² = (2 · 0.02 · 10⁻⁸)/(10⁻⁵)² = 4 ≥ 1,

so the condition is fulfilled.

Log-returns

From the time series S_t, log-returns are calculated by the formula

    r_t = log(S_t / S_{t−1}),

Figure 4.3: 500 discretized paths from the Heston volatility model with 1000 time steps

where log is the natural logarithm. These log-returns are then used to estimate the risk. The main reason for using log-returns is that if the underlying asset follows a geometric Brownian motion, the log-returns will be normally distributed. The assumption that log-returns are independent is also much more reasonable than making the same assumption for asset values or absolute returns.

4.2 VaR

To calculate VaR you must decide on a method and an estimation window of n historical data points. This n corresponds to the number of historical returns used in the estimation of the risk measure. A longer time period gives a more reliable estimate, but you also risk using older data that are no longer relevant.

Historical simulation

In the historical simulation approach you form the empirical distribution from the n last daily losses and assume that the loss the next day will come

from this distribution,

    P(L_t = L_{t−1}) = P(L_t = L_{t−2}) = ⋯ = P(L_t = L_{t−n}) = 1/n.

The losses are then sorted in decreasing order,

    L_1 ≥ L_2 ≥ ⋯ ≥ L_n,

and the VaR is calculated as

    VaR = L_{⌊αn⌋+1}.

This means that you need at least 1/α observations to reasonably estimate VaR_α this way. You also get a large degree of uncertainty when 1/α is close to n, as the VaR estimate jumps when ⌊αn⌋ changes: for example, ⌊0.01 · 200⌋ = 2, but ⌊(0.01 − δ) · 200⌋ = 1 for any δ > 0. So you can get a large change in the VaR from small changes in, for example, α or n.

Parametric method

In a parametric method you assume a parametric distribution for the data and estimate its parameters from the n returns. If you assume that the log-returns are normally distributed, this means that you calculate the expected value and standard deviation

    μ_t = E[r_i],    σ_t = SD[r_i],    i ∈ [t − n, t − 1],

where i is an integer. You then assume that the log-return follows the distribution

    r_t ∼ N(μ_t, σ_t),

from which the loss distribution can be calculated. This gives the VaR from the cumulative distribution function F_L of the random variable L_t,

    VaR_{t,α} = F_L^{−1}(1 − α).

For the Student's t-distribution the degrees of freedom are set to 4 when estimating the distribution. It would be interesting to estimate all three parameters, but with the number of distributional fits made this is not feasible, as it is much more computationally intensive.

4.3 ES

The estimation of ES is similar to the estimation of VaR for both methods.

Historical simulation

The losses are sorted in the same way, and the ES is calculated as the mean of the losses beyond the VaR,

    ÊS_{α,t} = (1/α) [ (1/n) Σ_{k=1}^{⌊αn⌋} L_k + (α − ⌊αn⌋/n) L_{⌊αn⌋+1} ],

which, when αn is an integer, simplifies to

    ÊS_{α,t} = (1/(αn)) Σ_{k=1}^{αn} L_k,

which is just the average of the α% largest losses. From this it can be seen that estimating ES with this method requires more data than estimating VaR with the same method. When 1/α > n the estimates of the ES and the VaR will be the same. The ES estimate will, however, not experience as large jumps as the VaR estimate, since it is a mean of multiple values.

Parametric methods

To estimate the ES with a parametric method you use the same distributional assumption and parameter estimation as for the VaR estimation. Using the integral definition of ES given in (2.1) we get

    ES_α(X) = (1/α) ∫₀^α VaR_u(X) du = (1/α) ∫_{1−α}^1 F_L^{−1}(u) du,

from which you can derive the ES for the normal method,

    ES_{α,t}(X) = S_{t−1} (1 − (1/α) Φ(Φ^{−1}(α) − σ) e^{μ+σ²/2}),

and the ES for the Student's t method,

    ES_{α,t} = S_{t−1} (1 − (1/α) ∫₀^α e^{μ + σ t_ν^{−1}(u)} du).

For the normal method this can be solved analytically, but for the Student's t method the integral has to be evaluated with numerical integration.
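The two historical-simulation estimators can be sketched directly from the formulas above (illustrative code, not from the thesis):

```python
import math

def hist_var(losses, alpha):
    """Historical-simulation VaR: the (floor(alpha*n) + 1)-th largest loss."""
    srt = sorted(losses, reverse=True)           # L_1 >= L_2 >= ... >= L_n
    return srt[math.floor(alpha * len(losses))]  # 0-based index = rank - 1

def hist_es(losses, alpha):
    """Historical-simulation ES: weighted mean of the tail beyond the VaR."""
    n = len(losses)
    srt = sorted(losses, reverse=True)
    k = math.floor(alpha * n)
    return (sum(srt[:k]) / n + (alpha - k / n) * srt[k]) / alpha

# With losses 1, ..., 100 and alpha = 0.025: the VaR is the third largest
# loss, 98, and the ES is (1/0.025) * ((100 + 99)/100 + 0.005 * 98) = 99.2.
losses = list(range(1, 101))
```

Note how the weighted term vanishes when αn is an integer (e.g. α = 0.02, n = 100 gives the plain tail average 99.5), and how for 1/α > n the ES and VaR estimates coincide.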

Creating samples for evaluation

When creating the sample for comparison you start with N sampled asset prices and choose an estimation window n for the estimation of the VaR or ES. You then estimate the VaR at time t using the log-returns from the time periods t − n, t − n + 1, …, t − 1. Doing this for the time periods t = n + 2 to t = N gives a total of N − n − 1 VaR or ES estimates. These are then paired with the losses from the same time periods, giving the samples (V̂aR_{α,t}, L_t) and (ÊS_{α,t}, L_t). On this paired sample the methods described in chapter 3 are used. So for a sample of N observed prices and an estimation window of n, the backtesting is based on N − n − 1 paired losses and risk-measure estimates.

As can be seen, overlapping samples are used. Non-overlapping samples would be preferable if they were possible, but this would require unrealistically long samples. Especially for the historical simulation, where the estimation window n is short and α is low, this could be a problem: the historical simulation only uses the tail values in the calculation, meaning that the estimate only changes when a new loss exceeds the previous VaR, or when a return that drops out of the window because it is too far back is larger than or equal to the VaR. For the parametric method this is a smaller problem, since the distribution is fitted to all the values, so the estimate changes as long as the newest value is not exactly the same as the last excluded value.

The rounding in the historical simulation method for VaR could also be a problem. The index ⌊αn⌋ + 1 is an integer, so the probability that you actually estimate is

    (⌊αn⌋ + 1)/n.    (4.1)

With α = 0.01 and n = 100 this gives an estimated probability of 2%, but with the same α and n = 99 the estimated probability is 1.01%.
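A sketch of this pairing procedure and of the implied probability in (4.1), assuming the loss at time t is the price drop S_{t−1} − S_t (illustrative code, not from the thesis):

```python
import math

def backtest_pairs(prices, n, estimator, alpha):
    """Pair each rolling-window risk estimate with the realized loss.

    prices: S_1, ..., S_N. estimator(returns, alpha) is any VaR or ES
    estimator applied to the n log-returns preceding time t. Returns the
    N - n - 1 (estimate, loss) pairs built from overlapping windows.
    """
    logret = [math.log(prices[i] / prices[i - 1]) for i in range(1, len(prices))]
    losses = [prices[i - 1] - prices[i] for i in range(1, len(prices))]
    pairs = []
    for t in range(n, len(logret)):          # first t with n prior returns
        pairs.append((estimator(logret[t - n:t], alpha), losses[t]))
    return pairs

# N = 10 prices and n = 3 give 10 - 3 - 1 = 6 pairs.
prices = [1.0 + 0.01 * i for i in range(10)]
pairs = backtest_pairs(prices, 3, lambda r, a: max(r), 0.025)

# The implied exceedance probability from equation (4.1):
implied = (math.floor(0.01 * 100) + 1) / 100   # 2% for alpha = 1%, n = 100
```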
For the test that relies on multiple VaR levels, described in section 3.2, this could also cause problems if the rounding is uneven, or even make different levels round to the same VaR. This can, however, only occur when the difference Δα between the VaR levels satisfies

    Δα < 1/n,

so, for example, if the difference between the VaR levels is 0.5% you need at least 200 returns to be sure not to encounter the problem of VaR levels rounding to the same estimate of the VaR.

Two different estimation windows n will be used: 199 and 299. That 199 rather than 200 is used is because of the rounding described in equation (4.1). Three sample lengths N will be used: 500, 1000 and 5000. N = 5000 is an unrealistically long sample, but it is interesting to see what happens with more data, as it makes the testing more accurate; 500 and 1000 are probably more realistic lengths in practice. Two levels of α are used: 2.5% and 5%. For n = 199 and the VaR levels used, the rounding gives the outcomes

    ⌊0.025 · 199⌋ = 4,  ⌊0.020 · 199⌋ = 3,  ⌊0.015 · 199⌋ = 2,  ⌊0.010 · 199⌋ = 1,  ⌊0.005 · 199⌋ = 0,

so the potential problem of different levels rounding to the same VaR estimate is not present.

4.4 Summary

The data sample is constructed so as to be close to a realistic scenario while at the same time having all its characteristics known. If the characteristics of the data were not known, it would be impossible to know what constitutes good performance of a backtest. Three types of data with different characteristics are simulated to see how the backtest methods react to different types of data; both data that the backtests are expected to accept and data that they are expected to reject are used. Different sample lengths are also used to see how the methods perform with different amounts of data. It is interesting both to see how much data are needed to get good results and to see what the result is when a lot of data are used.

Chapter 5

Results

5.1 Introduction to the results

The simulations of the backtests produce multiple p-values: for each combination of backtest, data, α and estimation window, 500 p-values are calculated. From these p-values, conclusions about the accuracy of the backtest are drawn. This is done with a qq-plot of the p-values against the quantiles of a random variable uniformly distributed between 0 and 1; if the hypothesis is correct, the p-values should follow that distribution. This is the same as plotting the p-values as a function of their quantile and comparing with the function f(p) = p. If the p-values are lower than the reference, more than the expected number of backtested models are rejected at a given confidence level of the backtest, indicating that the models are incorrect; this is what you would expect when the models being tested are known to be wrong. If the p-values are higher than the reference, this is also an indication that something is not right. One could mistake this for a very good model, but it rather indicates that the backtest is too weak, since unlikely outcomes are expected to appear with small probability: the chance that 500 tries would not produce a single outcome rejected at the 5% level just by chance is very low.

With four different methods, three types of data, two significance levels and two estimation windows, the total number of resulting plots would be 48. Therefore only the most interesting results are shown in this section; the rest are put in an appendix.
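The comparison against the reference f(p) = p can also be summarized numerically as the maximum deviation between the sorted p-values and the uniform plotting positions (an illustrative sketch, not the thesis's actual evaluation code):

```python
def uniformity_deviation(p_values):
    """Maximum deviation between sorted p-values and the reference f(p) = p.

    Under a correct model and a well-behaved backtest the p-values should be
    approximately U(0, 1); a large deviation signals either too many
    rejections (p-values piled near 0) or a too-weak backtest (p-values
    piled near 1).
    """
    m = len(p_values)
    return max(abs(p - (k + 0.5) / m) for k, p in enumerate(sorted(p_values)))

# Perfectly uniform p-values give deviation 0; degenerate ones do not.
uniform = [(k + 0.5) / 500 for k in range(500)]
```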

5.2 Geometric Brownian Motion

The results on the GBM data are discussed in more detail, as they cover the different possibilities quite well: a non-parametric model, a parametric model with the right parametric assumption and a parametric model with the wrong parametric assumption.

Methods 1 and 2

Historical simulation

In figure 5.1 the method for the estimation of the risk measures is historical simulation, which we expect to give a correct estimation of the ES. We see that method 1 works well except for the combination of α = 0.025 and n = 299. With this combination you get a rounding error for the VaR levels 2.5%, 1.5% and 0.5%, as described in chapter 4. The rounding error is not large, but with a large amount of data it is enough to reject many hypotheses with a high degree of confidence, as the unconditional coverage will be wrong.

Figure 5.1 also shows method 2 on GBM data with historical simulation, which we expect to be correct. Here we see a low level of rejections except when α = 0.025 and n = 299. The high chance of rejection is due to the same rounding problem as for method 1: for the VaR level 0.5% you get the second worst outcome and for the VaR level 1% the third worst outcome. The hypothesis assumes that outcomes above the 0.5% VaR and between the 1% and 0.5% VaR are equally likely, but the rounding makes outcomes above the 0.5% VaR twice as likely, giving very low p-values.

Parametric normal method

In figure 5.2 method 2 is used on GBM data with the parametric normal method, which we expect to be correct.
Here we see that the p-values are close to the expected levels, especially with a large amount of data, but even with less data the levels are close to expected, at least for α = 0.05. We can also see that the assumption that the number of possible p-values is large enough to be approximately continuous does not seem to hold for α = 0.025, as the p-values are constant over some ranges of quantiles. Figure 5.2 also shows the risk measures estimated with the parametric normal method, which we expect to give a correct estimation of

the ES. We see that with a small amount of data the p-values are quite close to the expected, but with a large amount of data the p-values are lower than expected; the number of rejected hypotheses is on the order of 2 to 3 times the expected number. This is not that surprising, as aggregating the p-values by taking the minimum gives a large rejection region.

Parametric Student's t method

In figure 5.3 we have method 2 on the GBM data with the parametric Student's t method, which we expect to be wrong. Here we see a large number of rejections even with a small amount of data; with α = 0.05 and N = 5000 the p-values are even displayed as zero. Figure 5.3 also shows the risk measures estimated with the parametric Student's t method, which we expect to overestimate the ES. We see that for α = 0.05 the backtest rejects the hypothesis with fewer data points, while for α = 0.025 more data points are required for a high rejection rate; both get a very high rejection rate with much data. The reason that some of the lines in these plots are straight is that a common outcome is zero VaR-exceedances, because the Student's t model gives a high VaR; with a correct model such outcomes would be very uncommon.

Figure 5.1: All plots show the p-values of the hypothesis on the y-axis and the quantile of the p-value on the x-axis. The data in all plots come from a geometric Brownian motion and the method used for estimating the risk measures is historical simulation. The method, α and estimation window for the risk measures are given by the title of each plot.


More information

Multi-Asset Options. A Numerical Study VILHELM NIKLASSON FRIDA TIVEDAL. Master s thesis in Engineering Mathematics and Computational Science

Multi-Asset Options. A Numerical Study VILHELM NIKLASSON FRIDA TIVEDAL. Master s thesis in Engineering Mathematics and Computational Science Multi-Asset Options A Numerical Study Master s thesis in Engineering Mathematics and Computational Science VILHELM NIKLASSON FRIDA TIVEDAL Department of Mathematical Sciences Chalmers University of Technology

More information

Lecture Note 8 of Bus 41202, Spring 2017: Stochastic Diffusion Equation & Option Pricing

Lecture Note 8 of Bus 41202, Spring 2017: Stochastic Diffusion Equation & Option Pricing Lecture Note 8 of Bus 41202, Spring 2017: Stochastic Diffusion Equation & Option Pricing We shall go over this note quickly due to time constraints. Key concept: Ito s lemma Stock Options: A contract giving

More information

Advanced Topics in Derivative Pricing Models. Topic 4 - Variance products and volatility derivatives

Advanced Topics in Derivative Pricing Models. Topic 4 - Variance products and volatility derivatives Advanced Topics in Derivative Pricing Models Topic 4 - Variance products and volatility derivatives 4.1 Volatility trading and replication of variance swaps 4.2 Volatility swaps 4.3 Pricing of discrete

More information

STOCHASTIC CALCULUS AND BLACK-SCHOLES MODEL

STOCHASTIC CALCULUS AND BLACK-SCHOLES MODEL STOCHASTIC CALCULUS AND BLACK-SCHOLES MODEL YOUNGGEUN YOO Abstract. Ito s lemma is often used in Ito calculus to find the differentials of a stochastic process that depends on time. This paper will introduce

More information

Risk management. VaR and Expected Shortfall. Christian Groll. VaR and Expected Shortfall Risk management Christian Groll 1 / 56

Risk management. VaR and Expected Shortfall. Christian Groll. VaR and Expected Shortfall Risk management Christian Groll 1 / 56 Risk management VaR and Expected Shortfall Christian Groll VaR and Expected Shortfall Risk management Christian Groll 1 / 56 Introduction Introduction VaR and Expected Shortfall Risk management Christian

More information

1.1 Interest rates Time value of money

1.1 Interest rates Time value of money Lecture 1 Pre- Derivatives Basics Stocks and bonds are referred to as underlying basic assets in financial markets. Nowadays, more and more derivatives are constructed and traded whose payoffs depend on

More information

Richardson Extrapolation Techniques for the Pricing of American-style Options

Richardson Extrapolation Techniques for the Pricing of American-style Options Richardson Extrapolation Techniques for the Pricing of American-style Options June 1, 2005 Abstract Richardson Extrapolation Techniques for the Pricing of American-style Options In this paper we re-examine

More information

Section B: Risk Measures. Value-at-Risk, Jorion

Section B: Risk Measures. Value-at-Risk, Jorion Section B: Risk Measures Value-at-Risk, Jorion One thing to always keep in mind when reading this text is that it is focused on the banking industry. It mainly focuses on market and credit risk. It also

More information

"Pricing Exotic Options using Strong Convergence Properties

Pricing Exotic Options using Strong Convergence Properties Fourth Oxford / Princeton Workshop on Financial Mathematics "Pricing Exotic Options using Strong Convergence Properties Klaus E. Schmitz Abe schmitz@maths.ox.ac.uk www.maths.ox.ac.uk/~schmitz Prof. Mike

More information

A new approach to backtesting and risk model selection

A new approach to backtesting and risk model selection A new approach to backtesting and risk model selection Jacopo Corbetta (École des Ponts - ParisTech) Joint work with: Ilaria Peri (University of Greenwich) June 18, 2016 Jacopo Corbetta Backtesting & Selection

More information

FV N = PV (1+ r) N. FV N = PVe rs * N 2011 ELAN GUIDES 3. The Future Value of a Single Cash Flow. The Present Value of a Single Cash Flow

FV N = PV (1+ r) N. FV N = PVe rs * N 2011 ELAN GUIDES 3. The Future Value of a Single Cash Flow. The Present Value of a Single Cash Flow QUANTITATIVE METHODS The Future Value of a Single Cash Flow FV N = PV (1+ r) N The Present Value of a Single Cash Flow PV = FV (1+ r) N PV Annuity Due = PVOrdinary Annuity (1 + r) FV Annuity Due = FVOrdinary

More information

Parameter estimation in SDE:s

Parameter estimation in SDE:s Lund University Faculty of Engineering Statistics in Finance Centre for Mathematical Sciences, Mathematical Statistics HT 2011 Parameter estimation in SDE:s This computer exercise concerns some estimation

More information

MEASURING PORTFOLIO RISKS USING CONDITIONAL COPULA-AR-GARCH MODEL

MEASURING PORTFOLIO RISKS USING CONDITIONAL COPULA-AR-GARCH MODEL MEASURING PORTFOLIO RISKS USING CONDITIONAL COPULA-AR-GARCH MODEL Isariya Suttakulpiboon MSc in Risk Management and Insurance Georgia State University, 30303 Atlanta, Georgia Email: suttakul.i@gmail.com,

More information

The mathematical definitions are given on screen.

The mathematical definitions are given on screen. Text Lecture 3.3 Coherent measures of risk and back- testing Dear all, welcome back. In this class we will discuss one of the main drawbacks of Value- at- Risk, that is to say the fact that the VaR, as

More information

IEOR 3106: Introduction to OR: Stochastic Models. Fall 2013, Professor Whitt. Class Lecture Notes: Tuesday, September 10.

IEOR 3106: Introduction to OR: Stochastic Models. Fall 2013, Professor Whitt. Class Lecture Notes: Tuesday, September 10. IEOR 3106: Introduction to OR: Stochastic Models Fall 2013, Professor Whitt Class Lecture Notes: Tuesday, September 10. The Central Limit Theorem and Stock Prices 1. The Central Limit Theorem (CLT See

More information

Market Risk and the FRTB (R)-Evolution Review and Open Issues. Verona, 21 gennaio 2015 Michele Bonollo

Market Risk and the FRTB (R)-Evolution Review and Open Issues. Verona, 21 gennaio 2015 Michele Bonollo Market Risk and the FRTB (R)-Evolution Review and Open Issues Verona, 21 gennaio 2015 Michele Bonollo michele.bonollo@imtlucca.it Contents A Market Risk General Review From Basel 2 to Basel 2.5. Drawbacks

More information

The University of Chicago, Booth School of Business Business 41202, Spring Quarter 2010, Mr. Ruey S. Tsay Solutions to Final Exam

The University of Chicago, Booth School of Business Business 41202, Spring Quarter 2010, Mr. Ruey S. Tsay Solutions to Final Exam The University of Chicago, Booth School of Business Business 410, Spring Quarter 010, Mr. Ruey S. Tsay Solutions to Final Exam Problem A: (4 pts) Answer briefly the following questions. 1. Questions 1

More information

An Application of Extreme Value Theory for Measuring Financial Risk in the Uruguayan Pension Fund 1

An Application of Extreme Value Theory for Measuring Financial Risk in the Uruguayan Pension Fund 1 An Application of Extreme Value Theory for Measuring Financial Risk in the Uruguayan Pension Fund 1 Guillermo Magnou 23 January 2016 Abstract Traditional methods for financial risk measures adopts normal

More information

2.1 Mathematical Basis: Risk-Neutral Pricing

2.1 Mathematical Basis: Risk-Neutral Pricing Chapter Monte-Carlo Simulation.1 Mathematical Basis: Risk-Neutral Pricing Suppose that F T is the payoff at T for a European-type derivative f. Then the price at times t before T is given by f t = e r(t

More information

Backtesting Trading Book Models

Backtesting Trading Book Models Backtesting Trading Book Models Using VaR Expected Shortfall and Realized p-values Alexander J. McNeil 1 1 Heriot-Watt University Edinburgh Vienna 10 June 2015 AJM (HWU) Backtesting and Elicitability QRM

More information

Probability in Options Pricing

Probability in Options Pricing Probability in Options Pricing Mark Cohen and Luke Skon Kenyon College cohenmj@kenyon.edu December 14, 2012 Mark Cohen and Luke Skon (Kenyon college) Probability Presentation December 14, 2012 1 / 16 What

More information

- 1 - **** d(lns) = (µ (1/2)σ 2 )dt + σdw t

- 1 - **** d(lns) = (µ (1/2)σ 2 )dt + σdw t - 1 - **** These answers indicate the solutions to the 2014 exam questions. Obviously you should plot graphs where I have simply described the key features. It is important when plotting graphs to label

More information

CAN LOGNORMAL, WEIBULL OR GAMMA DISTRIBUTIONS IMPROVE THE EWS-GARCH VALUE-AT-RISK FORECASTS?

CAN LOGNORMAL, WEIBULL OR GAMMA DISTRIBUTIONS IMPROVE THE EWS-GARCH VALUE-AT-RISK FORECASTS? PRZEGL D STATYSTYCZNY R. LXIII ZESZYT 3 2016 MARCIN CHLEBUS 1 CAN LOGNORMAL, WEIBULL OR GAMMA DISTRIBUTIONS IMPROVE THE EWS-GARCH VALUE-AT-RISK FORECASTS? 1. INTRODUCTION International regulations established

More information

Using Expected Shortfall for Credit Risk Regulation

Using Expected Shortfall for Credit Risk Regulation Using Expected Shortfall for Credit Risk Regulation Kjartan Kloster Osmundsen * University of Stavanger February 26, 2017 Abstract The Basel Committee s minimum capital requirement function for banks credit

More information

Stochastic Differential Equations in Finance and Monte Carlo Simulations

Stochastic Differential Equations in Finance and Monte Carlo Simulations Stochastic Differential Equations in Finance and Department of Statistics and Modelling Science University of Strathclyde Glasgow, G1 1XH China 2009 Outline Stochastic Modelling in Asset Prices 1 Stochastic

More information

Queens College, CUNY, Department of Computer Science Computational Finance CSCI 365 / 765 Fall 2017 Instructor: Dr. Sateesh Mane.

Queens College, CUNY, Department of Computer Science Computational Finance CSCI 365 / 765 Fall 2017 Instructor: Dr. Sateesh Mane. Queens College, CUNY, Department of Computer Science Computational Finance CSCI 365 / 765 Fall 2017 Instructor: Dr. Sateesh Mane c Sateesh R. Mane 2017 14 Lecture 14 November 15, 2017 Derivation of the

More information

RISK ADJUSTMENT FOR LOSS RESERVING BY A COST OF CAPITAL TECHNIQUE

RISK ADJUSTMENT FOR LOSS RESERVING BY A COST OF CAPITAL TECHNIQUE RISK ADJUSTMENT FOR LOSS RESERVING BY A COST OF CAPITAL TECHNIQUE B. POSTHUMA 1, E.A. CATOR, V. LOUS, AND E.W. VAN ZWET Abstract. Primarily, Solvency II concerns the amount of capital that EU insurance

More information

2 Control variates. λe λti λe e λt i where R(t) = t Y 1 Y N(t) is the time from the last event to t. L t = e λr(t) e e λt(t) Exercises

2 Control variates. λe λti λe e λt i where R(t) = t Y 1 Y N(t) is the time from the last event to t. L t = e λr(t) e e λt(t) Exercises 96 ChapterVI. Variance Reduction Methods stochastic volatility ISExSoren5.9 Example.5 (compound poisson processes) Let X(t) = Y + + Y N(t) where {N(t)},Y, Y,... are independent, {N(t)} is Poisson(λ) with

More information

Estimating the Greeks

Estimating the Greeks IEOR E4703: Monte-Carlo Simulation Columbia University Estimating the Greeks c 207 by Martin Haugh In these lecture notes we discuss the use of Monte-Carlo simulation for the estimation of sensitivities

More information

A Note about the Black-Scholes Option Pricing Model under Time-Varying Conditions Yi-rong YING and Meng-meng BAI

A Note about the Black-Scholes Option Pricing Model under Time-Varying Conditions Yi-rong YING and Meng-meng BAI 2017 2nd International Conference on Advances in Management Engineering and Information Technology (AMEIT 2017) ISBN: 978-1-60595-457-8 A Note about the Black-Scholes Option Pricing Model under Time-Varying

More information

King s College London

King s College London King s College London University Of London This paper is part of an examination of the College counting towards the award of a degree. Examinations are governed by the College Regulations under the authority

More information

Risk Measurement in Credit Portfolio Models

Risk Measurement in Credit Portfolio Models 9 th DGVFM Scientific Day 30 April 2010 1 Risk Measurement in Credit Portfolio Models 9 th DGVFM Scientific Day 30 April 2010 9 th DGVFM Scientific Day 30 April 2010 2 Quantitative Risk Management Profit

More information

King s College London

King s College London King s College London University Of London This paper is part of an examination of the College counting towards the award of a degree. Examinations are governed by the College Regulations under the authority

More information

Monte Carlo Simulations

Monte Carlo Simulations Monte Carlo Simulations Lecture 1 December 7, 2014 Outline Monte Carlo Methods Monte Carlo methods simulate the random behavior underlying the financial models Remember: When pricing you must simulate

More information

Long-Term Risk Management

Long-Term Risk Management Long-Term Risk Management Roger Kaufmann Swiss Life General Guisan-Quai 40 Postfach, 8022 Zürich Switzerland roger.kaufmann@swisslife.ch April 28, 2005 Abstract. In this paper financial risks for long

More information

EFFICIENT MONTE CARLO ALGORITHM FOR PRICING BARRIER OPTIONS

EFFICIENT MONTE CARLO ALGORITHM FOR PRICING BARRIER OPTIONS Commun. Korean Math. Soc. 23 (2008), No. 2, pp. 285 294 EFFICIENT MONTE CARLO ALGORITHM FOR PRICING BARRIER OPTIONS Kyoung-Sook Moon Reprinted from the Communications of the Korean Mathematical Society

More information

Heston Model Version 1.0.9

Heston Model Version 1.0.9 Heston Model Version 1.0.9 1 Introduction This plug-in implements the Heston model. Once installed the plug-in offers the possibility of using two new processes, the Heston process and the Heston time

More information

Tangent Lévy Models. Sergey Nadtochiy (joint work with René Carmona) Oxford-Man Institute of Quantitative Finance University of Oxford.

Tangent Lévy Models. Sergey Nadtochiy (joint work with René Carmona) Oxford-Man Institute of Quantitative Finance University of Oxford. Tangent Lévy Models Sergey Nadtochiy (joint work with René Carmona) Oxford-Man Institute of Quantitative Finance University of Oxford June 24, 2010 6th World Congress of the Bachelier Finance Society Sergey

More information

Monte Carlo Methods. Prof. Mike Giles. Oxford University Mathematical Institute. Lecture 1 p. 1.

Monte Carlo Methods. Prof. Mike Giles. Oxford University Mathematical Institute. Lecture 1 p. 1. Monte Carlo Methods Prof. Mike Giles mike.giles@maths.ox.ac.uk Oxford University Mathematical Institute Lecture 1 p. 1 Geometric Brownian Motion In the case of Geometric Brownian Motion ds t = rs t dt+σs

More information

Lattice (Binomial Trees) Version 1.2

Lattice (Binomial Trees) Version 1.2 Lattice (Binomial Trees) Version 1. 1 Introduction This plug-in implements different binomial trees approximations for pricing contingent claims and allows Fairmat to use some of the most popular binomial

More information

Computer Exercise 2 Simulation

Computer Exercise 2 Simulation Lund University with Lund Institute of Technology Valuation of Derivative Assets Centre for Mathematical Sciences, Mathematical Statistics Fall 2017 Computer Exercise 2 Simulation This lab deals with pricing

More information

Limit Theorems for the Empirical Distribution Function of Scaled Increments of Itô Semimartingales at high frequencies

Limit Theorems for the Empirical Distribution Function of Scaled Increments of Itô Semimartingales at high frequencies Limit Theorems for the Empirical Distribution Function of Scaled Increments of Itô Semimartingales at high frequencies George Tauchen Duke University Viktor Todorov Northwestern University 2013 Motivation

More information

Saddlepoint Approximation Methods for Pricing. Financial Options on Discrete Realized Variance

Saddlepoint Approximation Methods for Pricing. Financial Options on Discrete Realized Variance Saddlepoint Approximation Methods for Pricing Financial Options on Discrete Realized Variance Yue Kuen KWOK Department of Mathematics Hong Kong University of Science and Technology Hong Kong * This is

More information

The Two-Sample Independent Sample t Test

The Two-Sample Independent Sample t Test Department of Psychology and Human Development Vanderbilt University 1 Introduction 2 3 The General Formula The Equal-n Formula 4 5 6 Independence Normality Homogeneity of Variances 7 Non-Normality Unequal

More information

New robust inference for predictive regressions

New robust inference for predictive regressions New robust inference for predictive regressions Anton Skrobotov Russian Academy of National Economy and Public Administration and Innopolis University based on joint work with Rustam Ibragimov and Jihyun

More information

Math 416/516: Stochastic Simulation

Math 416/516: Stochastic Simulation Math 416/516: Stochastic Simulation Haijun Li lih@math.wsu.edu Department of Mathematics Washington State University Week 13 Haijun Li Math 416/516: Stochastic Simulation Week 13 1 / 28 Outline 1 Simulation

More information

Tests for Two ROC Curves

Tests for Two ROC Curves Chapter 65 Tests for Two ROC Curves Introduction Receiver operating characteristic (ROC) curves are used to summarize the accuracy of diagnostic tests. The technique is used when a criterion variable is

More information

M.I.T Fall Practice Problems

M.I.T Fall Practice Problems M.I.T. 15.450-Fall 2010 Sloan School of Management Professor Leonid Kogan Practice Problems 1. Consider a 3-period model with t = 0, 1, 2, 3. There are a stock and a risk-free asset. The initial stock

More information

Computer Exercise 2 Simulation

Computer Exercise 2 Simulation Lund University with Lund Institute of Technology Valuation of Derivative Assets Centre for Mathematical Sciences, Mathematical Statistics Spring 2010 Computer Exercise 2 Simulation This lab deals with

More information

The stochastic calculus

The stochastic calculus Gdansk A schedule of the lecture Stochastic differential equations Ito calculus, Ito process Ornstein - Uhlenbeck (OU) process Heston model Stopping time for OU process Stochastic differential equations

More information

1.1 Basic Financial Derivatives: Forward Contracts and Options

1.1 Basic Financial Derivatives: Forward Contracts and Options Chapter 1 Preliminaries 1.1 Basic Financial Derivatives: Forward Contracts and Options A derivative is a financial instrument whose value depends on the values of other, more basic underlying variables

More information

The University of Chicago, Booth School of Business Business 41202, Spring Quarter 2011, Mr. Ruey S. Tsay. Solutions to Final Exam.

The University of Chicago, Booth School of Business Business 41202, Spring Quarter 2011, Mr. Ruey S. Tsay. Solutions to Final Exam. The University of Chicago, Booth School of Business Business 41202, Spring Quarter 2011, Mr. Ruey S. Tsay Solutions to Final Exam Problem A: (32 pts) Answer briefly the following questions. 1. Suppose

More information

[D7] PROBABILITY DISTRIBUTION OF OUTSTANDING LIABILITY FROM INDIVIDUAL PAYMENTS DATA Contributed by T S Wright

[D7] PROBABILITY DISTRIBUTION OF OUTSTANDING LIABILITY FROM INDIVIDUAL PAYMENTS DATA Contributed by T S Wright Faculty and Institute of Actuaries Claims Reserving Manual v.2 (09/1997) Section D7 [D7] PROBABILITY DISTRIBUTION OF OUTSTANDING LIABILITY FROM INDIVIDUAL PAYMENTS DATA Contributed by T S Wright 1. Introduction

More information

Chapter 14 : Statistical Inference 1. Note : Here the 4-th and 5-th editions of the text have different chapters, but the material is the same.

Chapter 14 : Statistical Inference 1. Note : Here the 4-th and 5-th editions of the text have different chapters, but the material is the same. Chapter 14 : Statistical Inference 1 Chapter 14 : Introduction to Statistical Inference Note : Here the 4-th and 5-th editions of the text have different chapters, but the material is the same. Data x

More information

Inferences on Correlation Coefficients of Bivariate Log-normal Distributions

Inferences on Correlation Coefficients of Bivariate Log-normal Distributions Inferences on Correlation Coefficients of Bivariate Log-normal Distributions Guoyi Zhang 1 and Zhongxue Chen 2 Abstract This article considers inference on correlation coefficients of bivariate log-normal

More information

ECON Introductory Econometrics. Lecture 1: Introduction and Review of Statistics

ECON Introductory Econometrics. Lecture 1: Introduction and Review of Statistics ECON4150 - Introductory Econometrics Lecture 1: Introduction and Review of Statistics Monique de Haan (moniqued@econ.uio.no) Stock and Watson Chapter 1-2 Lecture outline 2 What is econometrics? Course

More information