Alternative VaR Models

Neil Roeth, Senior Risk Developer, TFG Financial Systems. 15th July 2015

Abstract

We describe a variety of VaR models in terms of their key attributes and differences, e.g., parametric vs. nonparametric, historical sampling vs. Monte Carlo simulation. We start with the simpler, well-known models and then describe randomized historical simulation and filtered historical simulation, highlighting the features and benefits of these alternative methods. Filtered historical simulation has some unique attributes that could make it a better alternative for managing risk.

Introduction

Value at Risk (VaR) is a measure of market risk that expresses it as the Pth percentile loss in value of a portfolio, i.e., the loss that will not be exceeded with probability P%. Many of the techniques for calculating VaR simulate a number of possible scenarios, each representing a different set of market conditions, and then value a portfolio in each of those scenarios to create a distribution of portfolio gains and losses from which the Pth percentile loss is determined. In other words, the distribution of portfolio gains and losses is derived from the distribution of scenarios, so to create a realistic set of gains and losses, one must create a realistic set of scenarios. P is typically 95% or 99%, so for a given distribution, the VaR loss is in the tail of the distribution, which corresponds to relatively rare scenarios. That has implications for the techniques used to calculate VaR.

There is a variety of models for generating the scenarios, and the choice of model basically reflects risk management's view of how realistically it can generate a set of scenarios that represents possible future market conditions. All of the models incorporate some information about past behaviour of market data, such as the volatilities of each risk factor and the correlations between different risk factors.
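The percentile calculation described above can be sketched in a few lines. This is a minimal illustration, not from the paper; the portfolio size and P&L distribution are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical portfolio P&L across 10,000 simulated scenarios
# (the scale of 1,000,000 is purely illustrative).
pnl = rng.normal(loc=0.0, scale=1_000_000.0, size=10_000)

# 99% VaR is the loss at the 1st percentile of the P&L distribution,
# reported as a positive number.
var_99 = -np.percentile(pnl, 1)
print(f"99% VaR: {var_99:,.0f}")
```

Whatever model produces the scenarios, this final step, reading a tail percentile off the gain/loss distribution, is the same.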
It is usually assumed that these statistical measures will stay nearly the same over the relatively short time horizons into the future for which VaR is calculated. It is also assumed that the mean return is zero. The different models have different ways of ensuring that these statistics are reproduced in generated scenarios. Another choice is how to weight historical data. One could take the view that historical changes which occurred in the recent past are no more likely than those which occurred in the distant past (equal weights), or one could take the view that historical changes in the recent past are more likely than those in the distant past (unequal weights).

The time horizon for VaR is fairly short, usually one to ten days. This time horizon affects the relative size of the market data changes (longer times imply larger changes). One might expect that VaR would be calculated by applying N-day market data changes to current market data, then valuing the portfolio N days forward of the current date, but that is not how VaR is typically calculated. Valuing a portfolio at a forward time requires portfolio aging, which is difficult. So, the market data changes are treated as instantaneous changes and the portfolio is valued as of the current date. This is equivalent to assuming that for the short time horizons used in VaR, the portfolio will not change significantly. This might not be a good assumption, e.g., for options near their maturity dates.

Some VaR models are based on sampling from a set of historical changes (nonparametric), while others calculate statistical measures of historical changes and use those to parameterize a process that generates possible changes (parametric), and others combine elements of both (semiparametric).

Variance/Covariance

This simple parametric model assumes the distribution of portfolio returns is a multivariate normal distribution. The historical data are used to calculate the covariances of returns, from which VaR can be calculated without any simulation of scenarios: VaR follows from standard analytic results for normal distributions. This model is very simple and easy to understand.

Disadvantages

The actual distributions of portfolio returns are not normal; they have fat tails and other differences that significantly impact VaR.

Historical Simulation

One of the simplest models for VaR uses historical data to determine a set of day-over-day changes to market data that have actually occurred over some period, then applies each one of those changes to current market data to generate scenarios. The idea is that if a certain set of daily changes occurred over a span of time in the recent past and affected the value of a portfolio, then that is a pretty good representation of the possible one-day changes in value of the portfolio that might occur from the current day to the next day.
The changes in market data used are exactly the changes that actually occurred in the recent past. The time period over which these changes are calculated is typically one year (for regulatory purposes) or perhaps several years, and the end date of the period is typically the current date. A year has approximately 260 business days, so that is about how many scenarios can be generated and used to create the gain/loss distribution. In statistical terms, that is a very small sample size, especially when considering the tail of the distribution, which drives the VaR measures. For example, the 99th percentile loss is one where only two scenarios will have a larger loss than the VaR value.

Equal weight is given to day-over-day changes in the recent past (near the end date) and to those in the distant past (near the start date). This is equivalent to the view that a change that occurred any time in the past is as likely as a change that occurred recently, and also that the volatility of past changes is the same as the volatility of recent changes.

The historical simulation model is a simple method of generating scenarios. It is relatively easy to calculate day-over-day changes in market data and apply them to current market data. All historical mean values, volatilities, and correlations are embedded in the historical data itself, and the model includes them without any extra effort. The small number of scenarios means relatively small computational requirements.

Disadvantages

The number of scenarios is statistically small, so the resulting VaR number has a fair amount of day-to-day instability associated with it. If a large day-over-day change occurs, it will tend to drive VaR until it drops out of the time period over which VaR is calculated, e.g., one or more years. In this case, the VaR calculation is more like a stress scenario for that one particular stress than a simulation of many possible scenarios. If it is driving VaR, then even if very little is changing in the market or in the portfolio, the day it drops off can see a big change in VaR, which is not very realistic.

Historical changes are assumed to apply regardless of the current values of market data. For example, if the historical data contains a day-over-day interest rate change from a time when rates were near zero, but current rates are in the double digits, then it might not be realistic to apply that change to current data. For this reason, the historical changes are usually calculated and applied as relative changes rather than absolute changes, but that is about as much as the model can do to handle very different market environments. Also, if current volatility is higher than historical values, then this model could underestimate VaR in the current environment, because it effectively assumes that the average volatility over the entire historical period is the current volatility.
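The mechanics of historical simulation with relative changes can be sketched as follows. All data here is synthetic and the portfolio is a made-up linear one, chosen only to illustrate the apply-historical-changes-to-current-data step.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative history: 261 business days of prices for 3 risk factors.
prices = np.cumprod(1 + rng.normal(0, 0.01, size=(261, 3)), axis=0) * 100

# Day-over-day relative changes, as discussed in the text.
rel_changes = prices[1:] / prices[:-1]          # shape (260, 3)

current = prices[-1]                            # today's market data

# Applying each historical day's change to current data gives one scenario.
scenarios = current * rel_changes               # shape (260, 3)

# A toy linear portfolio: value = holdings dotted with prices.
holdings = np.array([100.0, -50.0, 200.0])
pnl = scenarios @ holdings - current @ holdings

var_99 = -np.percentile(pnl, 1)
print(f"1-day 99% VaR from {len(pnl)} historical scenarios: {var_99:,.2f}")
```

Note that the number of scenarios is fixed at the number of historical dates, which is the small-sample limitation discussed above.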
Historical trends will be reproduced in the simulation, which might not be realistic. For example, if three years of interest rate returns are used, and for the first two of the three years rates were dropping and then bottomed out near zero in the third year, and that is the current situation, then in the simulation two thirds of the scenarios will be trying to push rates even lower.

The one-day VaR is often scaled up to D days by simply multiplying the one-day VaR by the square root of D. This is equivalent to the approximation that the distribution of portfolio returns is a normal distribution [1], which it is not (as mentioned above). Scaling this way also neglects portfolio aging effects.

[1] The distribution of a sum of normally distributed variates is also a normal distribution, and it follows from this property that the Pth percentile of the distribution scales with the square root of D. This property is not shared by other distributions.
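The square-root-of-D property for normal returns can be checked numerically. This sketch simulates D days of standard-normal daily P&L and compares the D-day 99% VaR against the scaled one-day VaR.

```python
import numpy as np

rng = np.random.default_rng(2)
D = 10
n = 200_000

# Daily P&L draws from a normal distribution (unit scale, illustrative).
daily = rng.normal(0.0, 1.0, size=(n, D))

one_day_var = -np.percentile(daily[:, 0], 1)
d_day_var = -np.percentile(daily.sum(axis=1), 1)   # D-day P&L = sum of daily

ratio = d_day_var / (one_day_var * np.sqrt(D))
print(ratio)   # close to 1 for normally distributed returns
```

For fat-tailed daily returns the same ratio would drift away from 1, which is exactly the caveat the footnote raises.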

Monte Carlo Simulation

Monte Carlo VaR is a parametric model. Historical data is used to calculate various statistical measures of the distribution of market data changes, then those measures are used directly or indirectly to calculate parameters that drive a Monte Carlo simulation of scenarios. For example, the covariance of the historical market data changes is used to generate sets of market data shifts that have the same covariance.

Monte Carlo simulation can generate arbitrarily large sets of scenarios, including many scenarios which did not actually occur in the historical data set used to calculate the parameters, so the coverage of possible scenarios is good. The calculation of the parameters is a separate step from the simulation in Monte Carlo VaR, so choices like how much history to use for the calculation of the statistical measures (in order to get appropriate values for realistic simulation) can be made independently of choices like how many scenarios to generate in the simulation (to get acceptable bounds on the simulation statistical error).

The ability to generate a large number of scenarios means there can be good coverage of possible scenarios and also that the variance of the simulation can be kept within desirable limits. The variance is a measure of how far the result of doing the simulation with a finite number of scenarios is from the result that the simulation would converge to if there were an infinite number of scenarios. The variance decreases as the number of scenarios increases, so a target variance can be achieved by running the Monte Carlo simulation with enough scenarios. The number of scenarios in the simulation is completely independent of the number of historical scenarios used to calculate the parameters of the Monte Carlo simulation.
Choosing the number of scenarios is not possible in historical simulation as described above, because the number of historical scenarios is the same as the number of simulated scenarios, but it is possible in Monte Carlo. There are sampling techniques that can be used in Monte Carlo simulation to reduce the variance - antithetic sampling, stratified sampling, etc. - so these are alternatives to increasing the number of scenarios.

The parameter data is relatively small compared to the historical data from which it is derived. The parameters themselves are information that can make it easier to see intuitively what the driving factors of VaR could be. For example, the volatility and correlation of the risk factors can be obtained from the covariance matrix, so one can see that risk factor A has twice the volatility of risk factor B, or that risk factor C is highly correlated with risk factor D. This information is not explicit in a non-parametric model like historical simulation.

A single large change in the historical data will factor into the parameters used for the Monte Carlo simulation, but its effect will be muted by the averaging effect of being combined with all the other historical data to derive the parameters. Therefore, it will not have the tendency to drive the VaR number in the way that it would for a historical simulation.

Disadvantages

There are many ways to parameterize a Monte Carlo simulation based on historical data, so many that choices often must be made to reduce the number. For example, there might be dozens of interest rates at different tenors captured for a single interest rate curve, and it would be prohibitively expensive to simulate each and every rate. One way to handle this is to pick a few parameters that capture the bulk of the changes of the entire curve and simulate those. [2] Each of these choices introduces the possibility of failing to capture significant changes and therefore generating an unrealistic VaR value.

Calibration and back testing are required in order to ensure that the parameters in a Monte Carlo simulation will generate realistic VaR values. This involves using the parameters in multiple simulations for historical dates and showing that the actual losses over that set of historical dates do not exceed the predicted VaR values from the simulations by more than the expected percentage (e.g., 1% if VaR is calculated at the 99th percentile).

A common assumption is that the distribution of each risk factor return can be simulated as a normal distribution with some appropriate mean, volatility and correlation with other risk factor returns. Historical data is not always consistent with the assumptions inherent in the derivation of the parameters. For example, a common technique is to calculate the historical correlations between risk factors and then generate random samples using a Cholesky decomposition of the correlation matrix. However, this technique can fail when two risk factors are very highly correlated, or when the historical data used to calculate the correlations has gaps due to holidays in certain regions, i.e., where there is no data for a date because it is not a business date. [3] Additional steps have to be taken to handle these numerical issues, and how they are handled implies views that have to be justified from a risk perspective. For example, two highly correlated risk factors could be treated as one independent risk factor and a second that is a spread to the first.
That would solve the numerical issue but would mean a change from modeling the second as an independent risk factor to modeling the spread. Gaps in the data could be handled by assuming that the value on a holiday is the same as the value on the prior date, or it could be linearly interpolated from the values on surrounding dates.

The methods to condense a large set of parameters into a few (as mentioned above) might have to be fairly sophisticated. For example, one might use principal component analysis (PCA) to determine a small set of factors that capture the bulk of the changes in the historical data. This complexity means additional sources of modeling error and can make it more difficult to understand intuitively the meaning of the factors affecting the simulation.

Recall that VaR is a measure of the tail of the portfolio gain/loss distribution. That means that in order to calculate a reasonably accurate VaR, you need to generate a reasonable number of scenarios in the tail of the simulated distribution, which in turn means you need to generate a lot of scenarios in the rest of the distribution. For 99% VaR, for every generated scenario that leads to a loss above the 99% level, there will have to be 99 others below it. So, while it is possible to generate as many scenarios as required to reach a particular level of accuracy, it can be very expensive.

[2] Techniques such as Principal Component Analysis can be used to determine the most appropriate parameters.

[3] This is not a rare occurrence when multiple markets are involved. I once did an empirical analysis of a couple dozen markets with different holidays and found that there were only about sixty business days per year that were not a holiday in any of the markets.
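The Cholesky-based generation described above can be sketched as follows. The covariance numbers are illustrative; the point is that parameters estimated from a small history can drive an arbitrarily large simulation with the same covariance.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative historical returns for 3 risk factors (260 business days).
hist = rng.multivariate_normal([0, 0, 0],
                               [[1.0, 0.8, 0.1],
                                [0.8, 1.0, 0.2],
                                [0.1, 0.2, 0.5]], size=260)

cov = np.cov(hist, rowvar=False)      # parameters estimated from history
L = np.linalg.cholesky(cov)           # raises LinAlgError if cov is not
                                      # positive definite - the failure mode
                                      # the text mentions for near-perfect
                                      # correlations or gappy data

# Simulate 50,000 scenarios with the same covariance as the history.
z = rng.standard_normal((50_000, 3))
sims = z @ L.T

print(np.allclose(np.cov(sims, rowvar=False), cov, atol=0.05))
```

The number of simulated scenarios (50,000) is independent of the 260 historical observations used to estimate the parameters, which is the separation of steps discussed above.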

Randomised Historical Simulation

RiskMetrics published a paper in 1997 about how to combine T historical return vectors with random Gaussians for 1-day VaR [RMM 97]. Even though they refer to it as a Monte Carlo model, it is fundamentally a historical sampling model rather than a parametric model. The authors show that if historical returns are multiplied by uncorrelated, normally distributed random variates, then the generated scenarios will have the same volatilities and correlations as the historical returns. However, the introduction of the random variates means that the number of generated scenarios can be larger than the number of historical returns, unlike a pure historical simulation as described above. This leads to an improvement in the variance of the simulation without the need to explicitly calculate or decompose a correlation matrix. This model has nearly the simplicity of simple historical simulation but can generate a large number of varying scenarios like the Monte Carlo simulation described above, so it combines the advantages of both and eliminates some of the disadvantages of each.

The original model described in the paper creates each simulated return vector as a linear combination of all of the historical returns. This would require quite a few random variates as well as a fair amount of multiplication (with 260 days of history and 1000 returns per date, generating 5000 scenarios would require 260 × 5000 = 1.3 million random variates and 260 × 1000 × 5000 = 1.3 billion multiplications). However, as they briefly mention, it is not really necessary to combine all historical returns for every scenario to achieve the benefits of this model. In fact, for each scenario, you can pick just one historical date and multiply its returns by one normally distributed random variate. The volatilities and correlations of the generated scenarios will still work out the same. For the same example, that means only 5000 random variates and 5000 × 1000 = 5 million multiplications.
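The single-date variant just described can be sketched in a few lines, with a numerical check that the simulated covariance matches the historical covariance. The history here is synthetic and already demeaned (the zero-mean case the paper considers).

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative historical returns for 2 correlated risk factors.
hist = rng.multivariate_normal([0, 0], [[1.0, 0.7], [0.7, 1.0]], size=260)
hist -= hist.mean(axis=0)             # normalize to zero mean

n_scen = 100_000
dates = rng.integers(0, len(hist), size=n_scen)   # one historical date each
z = rng.standard_normal(n_scen)                   # one N(0,1) variate each

# Each scenario is one historical return vector scaled by one variate.
scenarios = hist[dates] * z[:, None]

# The simulated covariance reproduces the historical covariance.
print(np.allclose(np.cov(scenarios, rowvar=False),
                  np.cov(hist, rowvar=False), atol=0.05))
```

No covariance matrix is ever calculated or decomposed to generate the scenarios; the check at the end only verifies the property claimed in the paper.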
Randomized historical simulation reproduces historical volatilities and correlations automatically, like the simpler historical simulation described above. This model can generate a large number of different scenarios very easily, which is good for reducing the variance in the simulation for more stable day-to-day VaR (like Monte Carlo). The large number of scenarios also avoids the excessive impact of one large historical change. Because it is a historical sampling technique combined with just independent, normally distributed random variates, it avoids numerical issues related to parameterization and calibration.

The mean of the randomized historical returns will be zero even if there is a historical trend (i.e., nonzero mean). The normally distributed random variates have zero mean, which makes the simulated returns have zero mean, so there is no tendency to bias the simulated returns in the same direction as the historical returns. However, see the section on nonzero means below.

Disadvantages

A historical date with a large gain can result in a simulated scenario with a large loss, because each historical return is multiplied by a normally distributed random variate, and negative variates are as likely as positive variates. This is mitigated somewhat by the fact that the large number of scenarios will tend to smooth out the effects of large historical returns, whether negative or positive.

The distribution of the generated returns will be a normal distribution even if the distribution of the historical returns is not normal.

Weighted Historical Simulation

All of the VaR models described so far have applied equal weights to historical returns. A risk manager could take the view that market data changes in a VaR calculation are more likely to be similar to changes in the recent past than in the distant past. A VaR calculation can take this into account by weighting the more recent historical returns more heavily than the more distant historical returns.

For the historical sampling techniques that involve randomly choosing a historical date for each scenario, equal weighting corresponds to choosing a random number from a uniform distribution. Unequal weighting corresponds to choosing a random number from some other distribution, e.g., exponential, which is straightforward. For Monte Carlo, the historical data used to calculate the parameters can be weighted. For example, when calculating means and covariances of historical returns, varying weights can be assigned and standard formulae for weighted means and weighted covariances can be used.

Nonzero Historical Means

For the various models based on historical returns, there can be historical trends in the data, i.e., nonzero means. If desired, these can be removed in the simulation. There are a couple of ways of doing this.

The historical returns can be mirrored by adding the negative of each historical return to the sample. This will double the number of samples (which is good) as well as making the mean zero, but it will also change the form of the resulting distribution of returns. For example, if the distribution of the historical returns happens to be a normal distribution with a nonzero mean, then the distribution of the historical returns plus their mirrors will not be a normal distribution (though it will be close if the mean is small relative to the variance).
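Two of the mechanics described above, unequal weighting of historical dates and mirroring to remove a trend, can be sketched as follows. The decay factor and return parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
T = 260                                   # historical dates; index T-1 = today

# Unequal weighting: exponentially decaying weights make recent dates
# more likely to be chosen (decay factor 0.99 is an assumption).
w = 0.99 ** np.arange(T - 1, -1, -1)
w /= w.sum()
dates = rng.choice(T, size=10_000, p=w)
frac_recent = (dates >= T // 2).mean()
print(frac_recent)                        # recent half chosen far more often

# Mirroring: append the negative of each return to force a zero mean.
hist = rng.normal(0.002, 0.01, size=T)    # returns with a small upward trend
mirrored = np.concatenate([hist, -hist])
print(len(mirrored), mirrored.mean())     # doubled sample, exactly zero mean
```

Equal weighting is recovered by passing no `p` to `choice`; any other weighting scheme only changes the `w` vector.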
The covariance of the RiskMetrics randomized historical returns described above will not match the covariance of the historical returns if there is a nonzero mean. If the mean of risk factor n is μ_n, the mean of risk factor m is μ_m, and their covariance is V_nm, then the covariance of the simulated returns will be V_nm + μ_n μ_m, which could be larger or smaller than the covariance of the historical returns depending on whether the two means have the same sign or not. The diagonal terms of the covariance matrix (where n = m) are the variances of the risk factors, so this implies that the variance of the simulated returns of risk factor n will be V_nn + μ_n²; it will always be larger than the variance of the historical returns of the same risk factor. [4]

The historical returns can be normalized to have a zero mean by simply subtracting the mean value from each return. This will preserve the sample variance. This can be done before doing simple historical simulation, mirrored historical simulation or randomized historical simulation, to preserve the covariances in all cases.

[4] The RiskMetrics paper only considers the case where the means of the historical returns are zero. This result in the case of nonzero means was derived at TFG.
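The V_nn + μ_n² result can be checked numerically for a single risk factor. The mean and volatility below are illustrative; with V = 1 and μ = 0.5, the simulated variance should come out near 1.25 rather than 1.

```python
import numpy as np

rng = np.random.default_rng(6)

mu, vol = 0.5, 1.0
hist = rng.normal(mu, vol, size=5_000)     # historical returns with a trend

# Randomized historical simulation without demeaning the history first.
z = rng.standard_normal(200_000)
dates = rng.integers(0, len(hist), size=z.size)
sims = hist[dates] * z

# Simulated variance ≈ V + mu^2 (here ≈ 1.0 + 0.25 = 1.25), not V.
print(sims.var())
```

Subtracting `hist.mean()` from the history before simulating removes the inflation, which is the normalization fix described above.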

Filtered Historical Simulation

Filtered Historical Simulation (FHS) attempts to combine the benefits of parametric and nonparametric models into a semiparametric model [FHS]. It uses sampling of historical returns to get the proper correlations and distribution of simulated returns, but replaces the historical volatilities with volatilities obtained from current market data. The issue it addresses is that unfiltered historical simulation tends to underestimate VaR in high volatility periods and overestimate VaR in low volatility periods, because it effectively uses a constant volatility that is the average over the whole period. By replacing the constant, average volatility with one taken from current market data, VaR will be higher in high volatility periods and lower in low volatility periods.

FHS is also used for N-day VaR by combining N one-day returns. At each step, the forward volatility obtained from current market data is used, so it effectively has a time-dependent parameterization of the volatility. This is not possible with nonparametric models. FHS can be viewed as a weighting model where the recent data for volatility (the current market conditions) are weighted 100% while the past data for volatility are weighted 0%.

The focus of FHS seems to be on the volatility of the returns, but little mention is made of the mean. If the historical returns have a nonzero mean, then FHS will reproduce that trend if the returns are not normalized. If they are normalized, then the simulated returns will have a mean of zero, which also might not be realistic.

The idea of scaling the volatility as described above can be generalized to the idea that current market data can be used for many different types of risk factors, allowing information about future expectations from current market data to be included in a simulation. The general process is to normalize historical data and then scale it to current values.
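A minimal sketch of that normalize-and-rescale step is shown below. This uses a single constant volatility for the whole history; actual FHS implementations typically devolatilize each return with a time-dependent (e.g. GARCH) volatility estimate, so treat this as a simplified illustration of the idea only.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative history drawn from a low-volatility period.
hist = rng.normal(0.0, 0.01, size=260)

hist_vol = hist.std()
current_vol = 0.03      # e.g. implied volatility from current market data
                        # (a made-up value for this sketch)

# Filter: normalize each historical return by the historical volatility,
# then rescale to the current volatility.
filtered = hist / hist_vol * current_vol

print(hist_vol, filtered.std())   # filtered.std() now equals current_vol
```

The filtered returns keep the shape (and, in the multi-factor case, the correlations) of the historical sample while carrying the current volatility, which is exactly the combination FHS is after.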
At TFG we have shown that this technique can be used to make the mean of simulated interest rate returns match the current curve while preserving the historical correlations.

FHS provides the benefits of historical sampling, i.e., the automatic inclusion of correlations and the shape of the historical distribution without any explicit assumptions about either. FHS allows parameterizing important parts of the simulation without requiring parameterizing everything (as in pure Monte Carlo simulations). As a semiparametric model, FHS could be extended to incorporate additional information beyond the historical returns that could be important for VaR. As described above, it captures current market information about implied volatilities, but other data could be included as well, like autocorrelation (correlation of day-over-day changes).

Disadvantages

FHS assumes that the volatility of the returns is independent of the correlations of those same returns, which is not necessarily true. This issue carries over from unfiltered historical simulation, where all of the historical statistical properties, including both correlations and volatility, are assumed to have the same values in the current environment, so it is no worse in that regard.

FHS requires a slight increase in the complexity of the simulation compared to unfiltered historical simulation.

FHS requires current market data for each risk factor from which it can determine the volatility that it will use to scale the normalized historical volatilities.

Summary

We described several VaR models: Variance/Covariance, Historical Simulation, Monte Carlo, Randomized Historical Simulation and Filtered Historical Simulation. These fall into the categories of parametric, semiparametric and nonparametric. The advantages and disadvantages of each model were highlighted. The nonparametric, historical sampling models are generally easier to implement than the parametric Monte Carlo model and avoid the calibration process, but a major issue is that they tend to reproduce historical trends that may not be desirable, and the simpler models do not generate many scenarios, which can lead to a fair amount of daily variance in VaR. Filtered historical simulation attempts to combine the benefits of the nonparametric models with the benefits of the parametric models while avoiding the major issues of both. FHS could be a better tool for managing risk than the standard models (often implemented because they are required for regulatory purposes), particularly if portfolio aging could be included to allow for longer term simulations.

Bibliography

[RMM 97] Peter Benson and Peter Zangari. A general approach to calculating VaR without volatilities and correlations. Morgan Guaranty Trust Company, Risk Management Research, 1997.

[FHS] Giovanni Barone-Adesi and Kostas Giannopoulos. Non-parametric VaR Techniques. Myths and Realities. Economic Notes by Banca Monte dei Paschi di Siena SpA, Vol. 30, No. 2, 2001, pp. 167-181.


Risk Decomposition for Portfolio Simulations

Risk Decomposition for Portfolio Simulations Marco Marchioro www.statpro.com Version 1.0 April 2010 Abstract We describe a method to compute the decomposition of portfolio risk in additive asset components

Monte Carlo Methods in Structuring and Derivatives Pricing

Monte Carlo Methods in Structuring and Derivatives Pricing Prof. Manuela Pedio (guest) 20263 Advanced Tools for Risk Management and Pricing Spring 2017 Outline and objectives The basic Monte Carlo algorithm

Modelling economic scenarios for IFRS 9 impairment calculations. Keith Church 4most (Europe) Ltd AUGUST 2017

Modelling economic scenarios for IFRS 9 impairment calculations Keith Church 4most (Europe) Ltd AUGUST 2017 Contents Introduction The economic model Building a scenario Results Conclusions Introduction

Modelling the Sharpe ratio for investment strategies

Modelling the Sharpe ratio for investment strategies Group 6 Sako Arts 0776148 Rik Coenders 0777004 Stefan Luijten 0783116 Ivo van Heck 0775551 Rik Hagelaars 0789883 Stephan van Driel 0858182 Ellen Cardinaels

Modeling credit risk in an in-house Monte Carlo simulation

Modeling credit risk in an in-house Monte Carlo simulation Wolfgang Gehlen Head of Risk Methodology BIS Risk Control Beatenberg, 4 September 2003 Presentation overview I. Why model credit losses in a simulation?

Publication date: 12-Nov-2001 Reprinted from RatingsDirect Commentary CDO Evaluator Applies Correlation and Monte Carlo Simulation to the Art of Determining Portfolio Quality Analyst: Sten Bergman, New

Operational Risk Aggregation

Operational Risk Aggregation Professor Carol Alexander Chair of Risk Management and Director of Research, ISMA Centre, University of Reading, UK. Loss model approaches are currently a focus of operational

MEASURING PORTFOLIO RISKS USING CONDITIONAL COPULA-AR-GARCH MODEL

MEASURING PORTFOLIO RISKS USING CONDITIONAL COPULA-AR-GARCH MODEL Isariya Suttakulpiboon MSc in Risk Management and Insurance Georgia State University, 30303 Atlanta, Georgia Email: suttakul.i@gmail.com,

The risk/return trade-off has been a

Efficient Risk/Return Frontiers for Credit Risk HELMUT MAUSSER AND DAN ROSEN HELMUT MAUSSER is a mathematician at Algorithmics Inc. in Toronto, Canada. DAN ROSEN is the director of research at Algorithmics

Statistical Models and Methods for Financial Markets

Tze Leung Lai/ Haipeng Xing Statistical Models and Methods for Financial Markets B 374756 4Q Springer Preface \ vii Part I Basic Statistical Methods and Financial Applications 1 Linear Regression Models

THE UNIVERSITY OF TEXAS AT AUSTIN Department of Information, Risk, and Operations Management

THE UNIVERSITY OF TEXAS AT AUSTIN Department of Information, Risk, and Operations Management BA 386T Tom Shively PROBABILITY CONCEPTS AND NORMAL DISTRIBUTIONS The fundamental idea underlying any statistical

King s College London

King s College London University Of London This paper is part of an examination of the College counting towards the award of a degree. Examinations are governed by the College Regulations under the authority

KERNEL PROBABILITY DENSITY ESTIMATION METHODS

5.- KERNEL PROBABILITY DENSITY ESTIMATION METHODS S. Towers State University of New York at Stony Brook Abstract Kernel Probability Density Estimation techniques are fast growing in popularity in the particle

Credit Exposure Measurement Fixed Income & FX Derivatives

1 Credit Exposure Measurement Fixed Income & FX Derivatives Dr Philip Symes 1. Introduction 2 Fixed Income Derivatives Exposure Simulation. This methodology may be used for fixed income and FX derivatives.

Measuring and managing market risk June 2003

Page 1 of 8 Measuring and managing market risk June 2003 Investment management is largely concerned with risk management. In the management of the Petroleum Fund, considerable emphasis is therefore placed

Bias Reduction Using the Bootstrap

Bias Reduction Using the Bootstrap Find f t (i.e., t) so that or E(f t (P, P n ) P) = 0 E(T(P n ) θ(p) + t P) = 0. Change the problem to the sample: whose solution is so the bias-reduced estimate is E(T(P

Three Components of a Premium The simple pricing approach outlined in this module is the Return-on-Risk methodology. The sections in the first part of the module describe the three components of a premium

Monte Carlo Methods in Financial Engineering

Paul Glassennan Monte Carlo Methods in Financial Engineering With 99 Figures

Correlation Structures Corresponding to Forward Rates

Chapter 6 Correlation Structures Corresponding to Forward Rates Ilona Kletskin 1, Seung Youn Lee 2, Hua Li 3, Mingfei Li 4, Rongsong Liu 5, Carlos Tolmasky 6, Yujun Wu 7 Report prepared by Seung Youn Lee

Operational Risk Modeling

Operational Risk Modeling RMA Training (part 2) March 213 Presented by Nikolay Hovhannisyan Nikolay_hovhannisyan@mckinsey.com OH - 1 About the Speaker Senior Expert McKinsey & Co Implemented Operational

Market Risk Analysis Volume II. Practical Financial Econometrics

Market Risk Analysis Volume II Practical Financial Econometrics Carol Alexander John Wiley & Sons, Ltd List of Figures List of Tables List of Examples Foreword Preface to Volume II xiii xvii xx xxii xxvi

GN47: Stochastic Modelling of Economic Risks in Life Insurance

GN47: Stochastic Modelling of Economic Risks in Life Insurance Classification Recommended Practice MEMBERS ARE REMINDED THAT THEY MUST ALWAYS COMPLY WITH THE PROFESSIONAL CONDUCT STANDARDS (PCS) AND THAT

King s College London

King s College London University Of London This paper is part of an examination of the College counting towards the award of a degree. Examinations are governed by the College Regulations under the authority

Chapter 8 Statistical Intervals for a Single Sample

Chapter 8 Statistical Intervals for a Single Sample Part 1: Confidence intervals (CI) for population mean µ Section 8-1: CI for µ when σ 2 known & drawing from normal distribution Section 8-1.2: Sample

[D7] PROBABILITY DISTRIBUTION OF OUTSTANDING LIABILITY FROM INDIVIDUAL PAYMENTS DATA Contributed by T S Wright

Faculty and Institute of Actuaries Claims Reserving Manual v.2 (09/1997) Section D7 [D7] PROBABILITY DISTRIBUTION OF OUTSTANDING LIABILITY FROM INDIVIDUAL PAYMENTS DATA Contributed by T S Wright 1. Introduction

CVA Capital Charges: A comparative analysis. November SOLUM FINANCIAL financial.com

CVA Capital Charges: A comparative analysis November 2012 SOLUM FINANCIAL www.solum financial.com Introduction The aftermath of the global financial crisis has led to much stricter regulation and capital

Financial Risk Management and Governance Other VaR methods. Prof. Hugues Pirotte

Financial Risk Management and Governance Other VaR methods Prof. ugues Pirotte Idea of historical simulations Why rely on statistics and hypothetical distribution?» Use the effective past distribution

Extend the ideas of Kan and Zhou paper on Optimal Portfolio Construction under parameter uncertainty

Extend the ideas of Kan and Zhou paper on Optimal Portfolio Construction under parameter uncertainty George Photiou Lincoln College University of Oxford A dissertation submitted in partial fulfilment for

Lecture 1: The Econometrics of Financial Returns

Lecture 1: The Econometrics of Financial Returns Prof. Massimo Guidolin 20192 Financial Econometrics Winter/Spring 2016 Overview General goals of the course and definition of risk(s) Predicting asset returns:

Operational Risk Aggregation

Operational Risk Aggregation Professor Carol Alexander Chair of Risk Management and Director of Research, ISMA Centre, University of Reading, UK. Loss model approaches are currently a focus of operational

2. Criteria for a Good Profitability Target

Setting Profitability Targets by Colin Priest BEc FIAA 1. Introduction This paper discusses the effectiveness of some common profitability target measures. In particular I have attempted to create a model

Measurement of Market Risk

Measurement of Market Risk Market Risk Directional risk Relative value risk Price risk Liquidity risk Type of measurements scenario analysis statistical analysis Scenario Analysis A scenario analysis measures

Biostatistics and Design of Experiments Prof. Mukesh Doble Department of Biotechnology Indian Institute of Technology, Madras

Biostatistics and Design of Experiments Prof. Mukesh Doble Department of Biotechnology Indian Institute of Technology, Madras Lecture - 05 Normal Distribution So far we have looked at discrete distributions

Market Risk VaR: Model- Building Approach. Chapter 15

Market Risk VaR: Model- Building Approach Chapter 15 Risk Management and Financial Institutions 3e, Chapter 15, Copyright John C. Hull 01 1 The Model-Building Approach The main alternative to historical

A gentle introduction to the RM 2006 methodology

A gentle introduction to the RM 2006 methodology Gilles Zumbach RiskMetrics Group Av. des Morgines 12 1213 Petit-Lancy Geneva, Switzerland gilles.zumbach@riskmetrics.com Initial version: August 2006 This

Annual risk measures and related statistics

Annual risk measures and related statistics Arno E. Weber, CIPM Applied paper No. 2017-01 August 2017 Annual risk measures and related statistics Arno E. Weber, CIPM 1,2 Applied paper No. 2017-01 August

Accelerated Option Pricing Multiple Scenarios

Accelerated Option Pricing in Multiple Scenarios 04.07.2008 Stefan Dirnstorfer (stefan@thetaris.com) Andreas J. Grau (grau@thetaris.com) 1 Abstract This paper covers a massive acceleration of Monte-Carlo

Energy Price Processes

Energy Processes Used for Derivatives Pricing & Risk Management In this first of three articles, we will describe the most commonly used process, Geometric Brownian Motion, and in the second and third

Risk Measuring of Chosen Stocks of the Prague Stock Exchange

Risk Measuring of Chosen Stocks of the Prague Stock Exchange Ing. Mgr. Radim Gottwald, Department of Finance, Faculty of Business and Economics, Mendelu University in Brno, radim.gottwald@mendelu.cz Abstract

Algorithmic Trading Session 12 Performance Analysis III Trade Frequency and Optimal Leverage. Oliver Steinki, CFA, FRM

Algorithmic Trading Session 12 Performance Analysis III Trade Frequency and Optimal Leverage Oliver Steinki, CFA, FRM Outline Introduction Trade Frequency Optimal Leverage Summary and Questions Sources

PRE CONFERENCE WORKSHOP 3

PRE CONFERENCE WORKSHOP 3 Stress testing operational risk for capital planning and capital adequacy PART 2: Monday, March 18th, 2013, New York Presenter: Alexander Cavallo, NORTHERN TRUST 1 Disclaimer

AP Statistics Chapter 6 - Random Variables

AP Statistics Chapter 6 - Random 6.1 Discrete and Continuous Random Objective: Recognize and define discrete random variables, and construct a probability distribution table and a probability histogram

Maturity as a factor for credit risk capital

Maturity as a factor for credit risk capital Michael Kalkbrener Λ, Ludger Overbeck y Deutsche Bank AG, Corporate & Investment Bank, Credit Risk Management 1 Introduction 1.1 Quantification of maturity

Online Appendix for Variable Rare Disasters: An Exactly Solved Framework for Ten Puzzles in Macro-Finance. Theory Complements

Online Appendix for Variable Rare Disasters: An Exactly Solved Framework for Ten Puzzles in Macro-Finance Xavier Gabaix November 4 011 This online appendix contains some complements to the paper: extension

Notes on: J. David Cummins, Allocation of Capital in the Insurance Industry Risk Management and Insurance Review, 3, 2000, pp

Notes on: J. David Cummins Allocation of Capital in the Insurance Industry Risk Management and Insurance Review 3 2000 pp. 7-27. This reading addresses the standard management problem of allocating capital

Strategies for Improving the Efficiency of Monte-Carlo Methods

Strategies for Improving the Efficiency of Monte-Carlo Methods Paul J. Atzberger General comments or corrections should be sent to: paulatz@cims.nyu.edu Introduction The Monte-Carlo method is a useful

Using Fractals to Improve Currency Risk Management Strategies

Using Fractals to Improve Currency Risk Management Strategies Michael K. Lauren Operational Analysis Section Defence Technology Agency New Zealand m.lauren@dta.mil.nz Dr_Michael_Lauren@hotmail.com Abstract

Economic Capital. Implementing an Internal Model for. Economic Capital ACTUARIAL SERVICES

Economic Capital Implementing an Internal Model for Economic Capital ACTUARIAL SERVICES ABOUT THIS DOCUMENT THIS IS A WHITE PAPER This document belongs to the white paper series authored by Numerica. It

Introduction to Risk Management

Introduction to Risk Management ACPM Certified Portfolio Management Program c 2010 by Martin Haugh Introduction to Risk Management We introduce some of the basic concepts and techniques of risk management

Financial Risk Measurement/Management

550.446 Financial Risk Measurement/Management Week of September 23, 2013 Interest Rate Risk & Value at Risk (VaR) 3.1 Where we are Last week: Introduction continued; Insurance company and Investment company

MLLunsford 1. Activity: Central Limit Theorem Theory and Computations

MLLunsford 1 Activity: Central Limit Theorem Theory and Computations Concepts: The Central Limit Theorem; computations using the Central Limit Theorem. Prerequisites: The student should be familiar with

THE IMPLEMENTATION OF VALUE AT RISK (VaR) IN ISRAEL S BANKING SYSTEM

THE IMPLEMENTATION OF VALUE AT RISKBank of Israel Banking Review No. 7 (1999), 61 87 THE IMPLEMENTATION OF VALUE AT RISK (VaR) IN ISRAEL S BANKING SYSTEM BEN Z. SCHREIBER, * ZVI WIENER, ** AND DAVID ZAKEN

P2.T5. Market Risk Measurement & Management. Bruce Tuckman, Fixed Income Securities, 3rd Edition

P2.T5. Market Risk Measurement & Management Bruce Tuckman, Fixed Income Securities, 3rd Edition Bionic Turtle FRM Study Notes Reading 40 By David Harper, CFA FRM CIPM www.bionicturtle.com TUCKMAN, CHAPTER

The Constant Expected Return Model

Chapter 1 The Constant Expected Return Model Date: February 5, 2015 The first model of asset returns we consider is the very simple constant expected return (CER) model. This model is motivated by the

I. Return Calculations (20 pts, 4 points each)

University of Washington Winter 015 Department of Economics Eric Zivot Econ 44 Midterm Exam Solutions This is a closed book and closed note exam. However, you are allowed one page of notes (8.5 by 11 or

A Simplified Approach to the Conditional Estimation of Value at Risk (VAR)

A Simplified Approach to the Conditional Estimation of Value at Risk (VAR) by Giovanni Barone-Adesi(*) Faculty of Business University of Alberta and Center for Mathematical Trading and Finance, City University

A Correlated Sampling Method for Multivariate Normal and Log-normal Distributions

A Correlated Sampling Method for Multivariate Normal and Log-normal Distributions Gašper Žerovni, Andrej Trov, Ivan A. Kodeli Jožef Stefan Institute Jamova cesta 39, SI-000 Ljubljana, Slovenia gasper.zerovni@ijs.si,

Predicting the Success of a Retirement Plan Based on Early Performance of Investments

Predicting the Success of a Retirement Plan Based on Early Performance of Investments CS229 Autumn 2010 Final Project Darrell Cain, AJ Minich Abstract Using historical data on the stock market, it is possible

Implementing Models in Quantitative Finance: Methods and Cases

Gianluca Fusai Andrea Roncoroni Implementing Models in Quantitative Finance: Methods and Cases vl Springer Contents Introduction xv Parti Methods 1 Static Monte Carlo 3 1.1 Motivation and Issues 3 1.1.1

VaR Introduction I: Parametric VaR

VaR Introduction I: Parametric VaR Tom Mills FinPricing http://www.finpricing.com VaR Definition VaR Roles VaR Pros and Cons VaR Approaches Parametric VaR Parametric VaR Methodology Parametric VaR Implementation

The Two-Sample Independent Sample t Test

Department of Psychology and Human Development Vanderbilt University 1 Introduction 2 3 The General Formula The Equal-n Formula 4 5 6 Independence Normality Homogeneity of Variances 7 Non-Normality Unequal

MONTE CARLO EXTENSIONS

MONTE CARLO EXTENSIONS School of Mathematics 2013 OUTLINE 1 REVIEW OUTLINE 1 REVIEW 2 EXTENSION TO MONTE CARLO OUTLINE 1 REVIEW 2 EXTENSION TO MONTE CARLO 3 SUMMARY MONTE CARLO SO FAR... Simple to program

Financial Engineering. Craig Pirrong Spring, 2006

Financial Engineering Craig Pirrong Spring, 2006 March 8, 2006 1 Levy Processes Geometric Brownian Motion is very tractible, and captures some salient features of speculative price dynamics, but it is

ERM Sample Study Manual

ERM Sample Study Manual You have downloaded a sample of our ERM detailed study manual. The full version covers the entire syllabus and is included with the online seminar. Each portion of the detailed

Empirical Distribution Testing of Economic Scenario Generators

1/27 Empirical Distribution Testing of Economic Scenario Generators Gary Venter University of New South Wales 2/27 STATISTICAL CONCEPTUAL BACKGROUND "All models are wrong but some are useful"; George Box

Computational Finance Improving Monte Carlo

Computational Finance Improving Monte Carlo School of Mathematics 2018 Monte Carlo so far... Simple to program and to understand Convergence is slow, extrapolation impossible. Forward looking method ideal

Stochastic Analysis Of Long Term Multiple-Decrement Contracts

Stochastic Analysis Of Long Term Multiple-Decrement Contracts Matthew Clark, FSA, MAAA and Chad Runchey, FSA, MAAA Ernst & Young LLP January 2008 Table of Contents Executive Summary...3 Introduction...6

Random Variables and Probability Distributions

Chapter 3 Random Variables and Probability Distributions Chapter Three Random Variables and Probability Distributions 3. Introduction An event is defined as the possible outcome of an experiment. In engineering

ROM SIMULATION Exact Moment Simulation using Random Orthogonal Matrices

ROM SIMULATION Exact Moment Simulation using Random Orthogonal Matrices Bachelier Finance Society Meeting Toronto 2010 Henley Business School at Reading Contact Author : d.ledermann@icmacentre.ac.uk Alexander

Monte Carlo Methods for Uncertainty Quantification

Monte Carlo Methods for Uncertainty Quantification Abdul-Lateef Haji-Ali Based on slides by: Mike Giles Mathematical Institute, University of Oxford Contemporary Numerical Techniques Haji-Ali (Oxford)

Clark. Outside of a few technical sections, this is a very process-oriented paper. Practice problems are key!

Opening Thoughts Outside of a few technical sections, this is a very process-oriented paper. Practice problems are key! Outline I. Introduction Objectives in creating a formal model of loss reserving:

1.1 Interest rates Time value of money

Lecture 1 Pre- Derivatives Basics Stocks and bonds are referred to as underlying basic assets in financial markets. Nowadays, more and more derivatives are constructed and traded whose payoffs depend on

Sampling and sampling distribution

Sampling and sampling distribution September 12, 2017 STAT 101 Class 5 Slide 1 Outline of Topics 1 Sampling 2 Sampling distribution of a mean 3 Sampling distribution of a proportion STAT 101 Class 5 Slide