A general approach to calculating VaR without volatilities and correlations


RiskMetrics Monitor, Second Quarter 1997

Peter Benson*
Peter Zangari
Morgan Guaranty Trust Company, Risk Management Research
(1-212) 648-8641
zangari_peter@jpmorgan.com

In the previous RiskMetrics Monitor¹ we described an alternative to the variance-covariance (VCV) method for portfolio risk analysis, which we called portfolio aggregation. Recall that portfolio aggregation involves reconstructing a time series of daily portfolio returns from a current set of portfolio positions and the daily returns on the individual securities. Value-at-Risk (VaR) estimates for such a portfolio are then obtained by computing the portfolio standard deviation directly from the portfolio return series, rather than by constructing individual volatilities and correlations.

In this note we describe the data used in portfolio aggregation and introduce further applications to risk analysis, including Monte Carlo simulation. We provide a general framework that end-users can apply to produce estimates of VaR. As a specific example of this approach, we show how to run Monte Carlo simulation without computing a covariance matrix.

The rest of this article is organized as follows. In section 1 we demonstrate how to compute Value-at-Risk without a covariance matrix; specifically, we show how to calculate VaR directly from the underlying return series under equal and exponential weighting schemes. Where exponential weighting is applied, we show that this VaR calculation is identical to the one used by the RiskMetrics VCV methodology. In section 2 we explain how to run Monte Carlo simulation without first constructing a variance/covariance matrix. Section 3 briefly mentions other applications where a covariance matrix is not required. Section 4 presents conclusions and directions for future research.

1. Using returns time series in place of volatilities and correlations

RiskMetrics provides volatilities and correlations for a set of benchmark securities. These securities are what we use to map actual cashflows. For example, suppose we need to compute the VaR of a cashflow denominated in US dollars that occurs in 6 years' time. To compute VaR we would map this cashflow to the two nearest RiskMetrics nodes, which represent the 5- and 7-year US zero rates. The volatilities of the log changes on the 5- and 7-year nodes, as well as the correlation between the two log changes, are then used to (1) determine how much to allocate to each of the two nodes and (2) compute VaR. Note that the 5- and 7-year nodes are what we refer to as the benchmark securities.

In these calculations each benchmark has associated with it a time series of volatility-adjusted returns, i.e., returns divided by their standard deviation. The volatilities and correlations are calculated from the set of all time series, and then the time series are discarded. Here we suggest that, rather than computing the volatilities and correlations and discarding the time series, we work directly with the time series of returns and bypass the calculation of correlations altogether. This provides a number of advantages:

- Existing time series can be modified directly, and new time series added, without recalculating the correlations with the other time series
- If returns are to be weighted (e.g., exponentially), the weighting scheme can be altered without replacing the dataset

* Peter Benson, previously with Risk Management Research at J.P. Morgan, is now with Greenwich Capital Markets, Inc. in Connecticut.
¹ See the article "Streamlining the market risk measurement process."

- If there is a large number of benchmarks relative to the number of returns per benchmark, the dataset of benchmark returns is smaller than the correlation matrix
- Better numerical precision
- An intuitive interpretation of how Monte Carlo returns are generated
- Fast marginal risk analysis

We now explain how users can work directly with the return time series (RTS). Define the RTS dataset as follows. For benchmark i, let r_i be the T x 1 vector of returns (with mean 0), where T is the number of returns (historical observations) for each benchmark, and let r_it denote the return on benchmark i in period t. Let R = [r_1 r_2 ... r_n], where n is the number of benchmarks. We can write the T x n matrix of returns R as follows:

[1]  R = | r_11 ... r_1n |
         |  .         .  |
         | r_T1 ... r_Tn |

Since R is a T x n matrix, the RTS dataset has Tn values. Compare this to the VCV dataset, which has n standard deviations and n(n-1)/2 correlations, for a total of n(n+1)/2 values. In situations where the number of benchmarks is more than twice the number of observations, the RTS dataset requires less storage, i.e., R requires less storage than its corresponding covariance matrix.

1.1 Relationship to the covariance matrix

The fact that we can use the returns directly to compute VaR follows from a simple observation: the covariance matrix under equally weighted observations is

[2]  C = (1/T) R'R,

where R' is the transpose of R. Hence R provides a simple factoring of the covariance matrix. This will be useful later.

1.2 Application to VCV

If w is the vector of benchmark equivalents, where each element w_i of w is the position value associated with one of the n benchmarks, then VaR is given by

[3]  VaR = 1.65 sqrt(w'Cw),

assuming that the underlying returns are distributed according to the conditional multivariate normal distribution. It then follows from [2] that

[4]  VaR = 1.65 sqrt(w'Cw) = 1.65 sqrt((1/T) w'R'Rw) = 1.65 (1/sqrt(T)) ||Rw||,

where ||Rw|| denotes the Euclidean length of the vector Rw. As we can see from the right-hand side of equation [4], the VaR calculation depends only on the benchmark weights w, the underlying return matrix R, and the number of historical observations T. Because the computational effort varies linearly with the number of benchmarks, using the RTS matrix is faster than using the covariance matrix C (provided the number of benchmarks is more than twice the number of observations in each benchmark).
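The identity in [4] is easy to check numerically. The sketch below uses hypothetical data (randomly generated, mean-adjusted returns and arbitrary position values, not RiskMetrics data) and compares the direct RTS calculation against the conventional VCV route:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 250, 4                              # observations, benchmarks (hypothetical)
R = rng.normal(0.0, 0.01, size=(T, n))     # T x n return matrix
R -= R.mean(axis=0)                        # the r_i are defined to have mean 0
w = rng.uniform(1e5, 5e5, size=n)          # benchmark-equivalent position values

# Equation [4]: VaR straight from the return matrix -- the n x n
# covariance matrix C is never formed.
var_rts = 1.65 * np.linalg.norm(R @ w) / np.sqrt(T)

# Equations [2]-[3]: the conventional VCV route, for comparison.
C = (R.T @ R) / T
var_vcv = 1.65 * np.sqrt(w @ C @ w)

assert np.isclose(var_rts, var_vcv)        # the two routes agree
```

Note that the direct route costs a single matrix-vector product R @ w, which is what makes the computational effort linear in the number of benchmarks.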

The preceding analysis demonstrates how to compute VaR by the VCV method when the data are equally weighted. We note, however, that the methodology is general and applies equally well when the data are weighted exponentially.

1.3 Applying exponential weighting

We now show how results similar to those presented in section 1.2 are obtained when exponential weighting is applied. When computing the covariance and correlation matrices, use in place of the data matrix R the augmented data matrix shown in equation [5], in which the row for period t is scaled by sqrt(λ)^(t-1) for a decay factor 0 < λ < 1 (period 1 being the most recent):

[5]  R~ = |         r_11       ...         r_1n       |
          |    sqrt(λ) r_21    ...    sqrt(λ) r_2n    |
          |          .                      .         |
          | sqrt(λ)^(T-1) r_T1 ... sqrt(λ)^(T-1) r_Tn |

Now we can define the covariance matrix based on exponential weighting simply as

[6]  C = (1/Λ) R~'R~,

where

[7]  Λ = Σ_{i=1}^{T} λ^(i-1).

It follows immediately from the results presented in [4] that VaR in this case is

[8]  VaR = 1.65 (1/sqrt(Λ)) ||R~w||.

Once a decay factor λ is selected in the VCV method, the correlation matrix is computed and it is no longer possible to change the weighting without recomputing the entire covariance matrix. Since no weights are assumed in the RTS dataset, however, we are free to choose different values of λ without any additional computational burden.

2. Generating multivariate normal returns for Monte Carlo

To generate multivariate normal returns, one typically performs a Cholesky or singular value decomposition (SVD) on the correlation matrix.² The resulting matrix is then combined with a vector of random variates to produce multivariate returns. This approach has several drawbacks.

² See Appendix E of the RiskMetrics Technical Document, 4th edition.
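Looping back to section 1.3, the exponentially weighted calculation in [5]-[8] can be sketched the same way (hypothetical data again). Changing λ amounts to rescaling the rows of R, with no correlation matrix to rebuild:

```python
import numpy as np

rng = np.random.default_rng(1)
T, n = 250, 4                               # observations, benchmarks (hypothetical)
R = rng.normal(0.0, 0.01, size=(T, n))
R -= R.mean(axis=0)
w = rng.uniform(1e5, 5e5, size=n)
lam = 0.94                                  # decay factor (RiskMetrics-style choice)

# Equation [5]: scale the row for period t by sqrt(lambda)^(t-1);
# the first row is the most recent observation.
R_aug = (np.sqrt(lam) ** np.arange(T))[:, None] * R
Lam = (lam ** np.arange(T)).sum()           # equation [7]

# Equation [8]: exponentially weighted VaR straight from the return series.
var_rts = 1.65 * np.linalg.norm(R_aug @ w) / np.sqrt(Lam)

# Equation [6]: the same number via the weighted covariance matrix.
C = (R_aug.T @ R_aug) / Lam
var_vcv = 1.65 * np.sqrt(w @ C @ w)
assert np.isclose(var_rts, var_vcv)
```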

- The decomposed matrix does not easily provide an intuitive understanding of how the deviates are generated
- Changing a single return value of one benchmark requires a new decomposition
- Cholesky decomposition requires that the correlation matrix be PD (positive definite), and SVD requires that it be PSD (positive semi-definite)³

³ The correlation matrix cannot be PD if there are more benchmarks (columns of R) than there are observations in each benchmark (rows of R). In fact, due to roundoff errors, even PSD is rarely satisfied. These problems can be dealt with by tweaking the correlation matrix to become PSD (which introduces other errors) or by increasing the number of observations in each benchmark (which bloats the dataset without adding significant information, and may require data that is unavailable or outside the sample of interest). The RTS dataset allows analysis with fewer observations. Of course, one must still be careful that the chosen dataset is sufficiently representative of the relationships between the benchmarks.

Using the RTS dataset we can take a more direct approach that suggests an intuitive interpretation. Consider a row of the R matrix. It represents one observation interval, a snapshot of all benchmark returns. Suppose we multiply each row by a normally distributed random variate and add the rows. The result is a vector of Monte Carlo returns that has the correct variances and correlations. In other words, Monte Carlo returns are simply the sum of random multiples of the benchmark snapshots.

We now show how to perform Monte Carlo with the RTS dataset in a way that avoids any volatility and correlation calculations. Let ε = (ε_1, ε_2, ..., ε_T) be a 1 x T vector of independent N(0,1) random variates. We are then interested in simulating the 1 x n vector of correlated returns

r^ = (1/sqrt(T)) ε R,

where the factor 1/sqrt(T) matches the normalization of the covariance matrix in [2]. Consider the ith component, r^_i = (1/sqrt(T)) ε r_i. It is a sum of independent normally distributed random variables ε_t r_it / sqrt(T), each with mean zero. So the mean of the sum is the sum of the means, which is zero. And the variance of the sum is the sum of the variances, (1/T) Σ_t r_it², which is exactly the variance of r_i. So r^_i has the same mean and variance as r_i. Unlike r_i, however, r^_i is truly normally distributed.

What about the correlation of r^_i with the other simulated returns? Consider another simulated return r^_j. The correlation is ρ^_ij = cov(r^_i, r^_j) / (σ_i σ_j). Since r^_i and r^_j have mean zero,

[9]  cov(r^_i, r^_j) = E[r^_i r^_j]
                     = (1/T) E[ Σ_t ε_t² r_it r_jt + Σ_t Σ_{s≠t} ε_t ε_s r_it r_js ].

Because the ε_t are independent, the cross terms have zero expected value. Since the expectation of a sum equals the sum of the expectations, the right-hand side becomes

[10]  (1/T) Σ_t r_it r_jt E[ε_t²] = (1/T) Σ_t r_it r_jt = cov(r_i, r_j).

Hence covariances, and therefore correlations, are also preserved.

In general, we can show that the covariance matrix of the 1 x n vector of random variables y = (1/sqrt(T)) ε R is C = (1/T) R'R. This follows immediately from the definition of the variance of y, where the mean of y is zero. Letting E(x) denote the mathematical expectation of x, we can write the variance of y as follows:

[11]  Variance(y) = E[y'y] = (1/T) E[R' ε' ε R] = (1/T) R' E[ε' ε] R = (1/T) R'R = C.

Monte Carlo with the RTS dataset certainly represents a performance improvement over Monte Carlo with a covariance matrix in terms of pre-processing (i.e., in decomposing a correlation matrix), since no pre-processing is needed at all. But is it a fast way to generate deviates? Once the independent normal deviates are generated, simulating each benchmark uses T multiplications. This compares with k multiplications (where k is the rank of R) when using a Cholesky or SVD decomposition. However, we can make return generation faster still. By randomly sampling the observation vectors (i.e., using only a subset of the rows) we can still preserve correlations and variances. In fact, we can generate our Monte Carlo returns with just one random normal deviate per trial and one multiplication per benchmark. Details are left for future discussion.

3. Other applications

The RTS dataset also allows more powerful tools for marginal analysis and ad hoc manipulation of the benchmark returns. The Monte Carlo sampling technique discussed above, as well as the marginal analysis techniques based on RTS, are employed by CreditManager (J.P. Morgan's credit risk calculator) for Monte Carlo simulation of credit portfolios. Details of these methods may be covered in a subsequent note.

4. Conclusions

As the number of benchmarks grows relative to the effective number of observations per benchmark, it becomes more efficient to use the RTS dataset. Moreover, using the RTS dataset provides a number of benefits over VCV-based approaches in terms of flexibility. In particular, we have shown that it provides an intuitively appealing technique for generating Monte Carlo samples.
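As a closing illustration, the simulation scheme of section 2 can be sketched end to end on hypothetical data. Each trial is a random linear combination of the historical snapshots, scaled by 1/sqrt(T) to match the equal-weighting convention C = (1/T)R'R of equation [2], and the sample covariance of the simulated returns converges to C with no decomposition ever performed:

```python
import numpy as np

rng = np.random.default_rng(2)
T, n = 100, 4                               # observations, benchmarks (hypothetical)
R = rng.normal(0.0, 0.01, size=(T, n))
R -= R.mean(axis=0)                         # benchmark returns have mean 0
C = (R.T @ R) / T                           # equation [2], used only for checking

# Each Monte Carlo trial: draw eps ~ N(0, I_T) and form (1/sqrt(T)) eps R,
# i.e., a random multiple of each historical snapshot (row of R), summed.
m = 50_000                                  # number of trials
eps = rng.standard_normal((m, T))
trials = (eps @ R) / np.sqrt(T)             # m x n simulated return vectors

# The simulated returns are exactly multivariate normal with covariance C,
# as in equation [11]; check the sample covariance against C.
C_mc = trials.T @ trials / m
assert np.allclose(C_mc, C, atol=5e-6)
```

One plausible reading of the article's sampling shortcut (whose details it leaves for future discussion) is that a trial may use a single randomly chosen row of R multiplied by one normal deviate: averaging over the random row index reproduces (1/T)R'R in expectation, giving one deviate per trial and one multiplication per benchmark.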