Measuring Financial Risk using Extreme Value Theory: evidence from Pakistan


Dr. Abdul Qayyum and Faisal Nawaz

Abstract

The purpose of the paper is to show some methods of extreme value theory through an analysis of Pakistani financial data. It also introduces the fundamentals of extreme value theory as well as practical aspects of estimating and assessing financial models for tail-related risk measures.

KEY WORDS: Extreme Value Theory; Value at Risk; Financial Risk Management; Financial Risk Modeling; Financial Time Series.

1 Introduction

This study was triggered by the crisis of October 2008 in the global markets, not only in America but also in Europe and Asia. The former chief of the US Federal Reserve (Greenspan) met with Congress and described the crisis as a tsunami of the financial markets that occurs once in a century, stating that the financial models used during the past four decades were useless in the face of the snowball effects of what began as a small problem in the real estate market in the United States. This has led to numerous criticisms of existing risk management systems and motivated the search for methodologies better able to cope with rare events that have heavy consequences. The typical question one would like to answer is: if things go wrong, how wrong can they go? The problem is then how to model the rare phenomena that lie outside the range of available observations. In such a situation it seems essential to rely

on a well-founded methodology. Unlike standard VaR methods, extreme value theory provides a firm theoretical foundation on which we can build statistical models describing extreme events. This paper deals with the behavior of the tails of Pakistani stock returns. More specifically, the focus is on the use of extreme value theory to assess tail-related risk; it thus aims at providing a modeling tool for modern risk management.

2 Risk Measures

We will discuss two risk measures: Value-at-Risk (VaR) and the return level. Value at Risk (VaR) is a method of assessing risk that uses standard statistical and mathematical techniques routinely used in other technical fields. Loosely, VaR summarizes the worst loss over a target horizon that will not be exceeded with a given level of confidence (see Jorion (2007), page 17). VaR provides an accurate statistical estimate of the maximum probable loss on a portfolio when markets are behaving normally. VaR is typically calculated for a one-day time period, known as the holding period. A 99% confidence level means that there is (on average) a 1% chance of the loss being in excess of that VaR. VaR can then be defined as the p-th quantile of the distribution F,

VaR_p = F^(-1)(1 - p),

where F^(-1) is the so-called quantile function, defined as the inverse of the distribution function F.

The return level is defined as follows: if H is the distribution of the maxima observed over successive non-overlapping periods of equal length, the return level R_n^k = H^(-1)(1 - 1/k) is the level expected to be exceeded in one out of k periods of length n.

3 Extreme Value Theory

EVT has two significant results. First, the asymptotic distribution of a series of maxima (minima) is modeled, and under certain conditions the distribution of the standardized maximum of the series is shown to converge to the Gumbel, Frechet, or Weibull distribution. A standard form of these three distributions is called the generalized extreme value (GEV) distribution. The second significant result concerns the distribution of excesses over

a given threshold, where one is interested in modeling the behavior of the excess loss once a high threshold (loss) is reached. This result is used to estimate very high quantiles (0.999 and higher). EVT shows that the limiting distribution is a generalized Pareto distribution (GPD).

3.1 The GEV Distribution (Fisher-Tippett (1928), Gnedenko (1943) Result)

Let {X_1, ..., X_n} be a sequence of independent and identically distributed (iid) random variables. The (suitably normalized) maximum X_(n) = max(X_1, ..., X_n) converges in law (weakly) to the following distribution:

H_(ξ,µ,σ)(x) = exp{ -[1 + ξ(x - µ)/σ]^(-1/ξ) }    if ξ ≠ 0
H_(ξ,µ,σ)(x) = exp{ -e^(-(x - µ)/σ) }             if ξ = 0        (1)

valid for 1 + ξ(x - µ)/σ > 0. The parameters µ and σ correspond, respectively, to a location and a scale parameter; the third parameter, ξ, called the tail index, indicates the thickness of the tail of the distribution. The larger the tail index, the thicker the tail. When the index is equal to zero, the distribution H corresponds to a Gumbel type; when the index is negative, to a Weibull; when the index is positive, to a Frechet distribution. The Frechet distribution corresponds to fat-tailed distributions and has been found to be the most appropriate for fat-tailed financial data. This result is very significant, since the asymptotic distribution of the maximum always belongs to one of these three distributions, whatever the original distribution. The asymptotic distribution of the maximum can therefore be estimated without making any assumptions about the nature of the original distribution of the observations (unlike with parametric VaR methods), that distribution being generally unknown.

3.2 The Excess Beyond a Threshold (Pickands (1975), Balkema-de Haan (1974) Result)

After estimating the maximum loss (in terms of VaR or another methodology), it is interesting to consider the residual risk beyond this maximum. The second result of EVT involves estimating the conditional distribution of the excess beyond a very high threshold.
Let X be a random variable with distribution F and right endpoint x_F. For a fixed threshold u < x_F, F_u is the distribution of the excesses of X over the threshold u:

F_u(x) = P(X - u ≤ x | X > u),   x ≥ 0.

Once the threshold is chosen (as a result of a VaR calculation, for example), the conditional distribution F_u is approximated by a GPD. We can write:

F_u(x) ≈ G_(ξ,β(u))(x),   u → ∞,  x ≥ 0,

where

G_(ξ,β(u))(x) = 1 - (1 + ξx/β)^(-1/ξ)    if ξ ≠ 0
G_(ξ,β(u))(x) = 1 - e^(-x/β)             if ξ = 0        (2)

Distributions of the type H are used to model the behavior of the maximum of a series. The distributions G of the second result model the excesses beyond a given threshold, where this threshold is supposed to be sufficiently large to satisfy the condition u → ∞; for a more technical discussion, see Castillo and Hadi (1997).

The application of EVT involves a number of challenges. The early stage of data analysis is very important in determining whether the series has the fat tail needed to apply the EVT results. Also, the parameter estimates of the limit distributions H and G depend on the number of extreme observations used. The threshold should be large enough to satisfy the conditions that permit the application of the theory (u → ∞), while at the same time leaving sufficient observations for the estimation. Finally, it has so far been assumed that the extreme observations are iid. The choice of the method for extracting maxima can be crucial in making this assumption viable. However, there are extensions of the theory for estimating the various parameters from dependent observations; see Embrechts, Kluppelberg and Mikosch (1999) and Embrechts and Schmidli (1994).

4 Modeling the Fat Tails of Stock Returns

Our aim is to model the tail of the distribution of KSE (Karachi Stock Exchange) 100 index negative movements in order to estimate extreme quantiles. We analyze the daily returns of the KSE 100 index for the period from 29-06-1993 to 27-05-2009. The application was executed in a MATLAB 7.x programming environment.
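As a quick numerical check, the GEV cdf in equation (1) and the GPD cdf in equation (2) can be written out directly and compared with a standard statistical library. The following is an illustrative sketch in Python with SciPy (not the MATLAB code used in the paper); note that SciPy parameterizes the GEV shape as c = -ξ, while its GPD shape c coincides with ξ:

```python
import numpy as np
from scipy.stats import genextreme, genpareto

# GEV cdf H_{xi,mu,sigma}(x) for xi != 0, written directly from equation (1).
def gev_cdf(x, xi, mu, sigma):
    z = 1 + xi * (x - mu) / sigma
    return np.exp(-z ** (-1 / xi))

# GPD cdf G_{xi,beta}(x) for xi != 0, written directly from equation (2).
def gpd_cdf(x, xi, beta):
    return 1 - (1 + xi * x / beta) ** (-1 / xi)

xi, mu, sigma, beta = 0.2, 0.0, 1.0, 1.0
x = 1.5

# SciPy's genextreme uses shape c = -xi; genpareto uses c = +xi.
h1 = gev_cdf(x, xi, mu, sigma)
h2 = genextreme.cdf(x, c=-xi, loc=mu, scale=sigma)
g1 = gpd_cdf(x, xi, beta)
g2 = genpareto.cdf(x, c=xi, loc=0, scale=beta)

print(abs(h1 - h2) < 1e-9, abs(g1 - g2) < 1e-9)  # both True
```

Keeping the sign convention of the shape parameter straight is the main practical pitfall when moving between the notation of the paper and a given software package.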
Figure 1 shows the plot of the n = 3,762 observed daily returns.
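Before any tail model is fitted, the VaR definition of Section 2, the (1 - p) quantile of the loss distribution, can be evaluated directly on the empirical returns. The following is a minimal Python sketch on simulated heavy-tailed data standing in for the KSE series (the actual series is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated heavy-tailed daily returns (Student-t); a stand-in for the KSE data.
returns = 0.01 * rng.standard_t(df=4, size=3762)

def empirical_var(returns, p):
    """Empirical VaR_p = F^(-1)(1 - p) of the loss distribution (losses = -returns)."""
    return np.quantile(-returns, 1 - p)

var_99 = empirical_var(returns, p=0.01)
print(f"99% one-day empirical VaR: {var_99:.4f}")
```

With 3,762 observations the 99% quantile is estimated from only about 38 tail points, which is precisely why the paper turns to EVT for quantiles of order 0.999 and beyond.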

Figure 1: Daily returns of the KSE 100 index.

Figure 2: (a) QQ plot of KSE returns against the normal distribution and (b) sample mean excess plot.

4.1 Exploratory Data Analysis

The main exploratory tools used in EVT are the quantile-quantile plot (QQ plot) and the sample mean excess plot (ME plot). The QQ plot makes it possible to assess how well a selected model fits the tail of the empirical distribution. For example, if the series is approximated by a normal distribution and the empirical data are fat-tailed, the graph will show a curve upward at the right end or downward at the left end. This is the case for the QQ plot of the KSE returns against the normal (Figure 2a). Another graphical tool that is helpful for the selection of the threshold u is the sample mean excess plot. The sample mean excess function is an estimate of the mean excess function e(u); for the GPD it is linear. Figure 2b shows the sample mean excess plot corresponding to our data.

First, we will model the exceedances over a given threshold, which will enable us to estimate high quantiles and the corresponding expected shortfall. Second, we will consider the distribution of the so-called block maxima, which then

allows the determination of the return level.

4.2 The Peak over Threshold (POT) Method

Given the theoretical results presented in the previous section, we know that the distribution of the observations above the threshold in the tail should be a generalized Pareto distribution (GPD). This is confirmed by Figure 3.

Figure 3: GPD fitted to the exceedances above the threshold 0.0292.

We use the maximum likelihood estimation method and obtain the estimates ξ̂ = 0.0502 and σ̂ = 0.0120. High quantiles may now be read directly from the plot or computed from the quantile formula with the parameters replaced by their estimates:

VaR_gev = µ̂ + (σ̂/ξ̂) { (-ln(p))^(-ξ̂) - 1 }        (3)

For instance, if we choose p = 0.01, we compute VaR = 0.0493. McNeil, Frey and Embrechts (2005) discuss this method in detail (for detail on backtesting VaR based on tail losses, see Wong (2009)). Table 1 summarizes the point estimates and the maximum likelihood (ML) and bootstrap (BS) confidence intervals of the marginal distributions. The results in Table 1 indicate that with probability 0.01 tomorrow's loss will exceed the value 4.93%. These point estimates are complemented with 95% confidence intervals: the expected loss will, in 95 out of 100 cases, lie between 4.45% and 5.66%. As an alternative to confidence intervals, we can also compute an approximation to the asymptotic covariance matrix of

the parameter estimates, and from that extract the parameter standard errors, which are 0.666 and 0.0012 for ξ and σ, respectively.

Table 1: Point estimates and 95% maximum likelihood (ML) and bootstrap (BS) confidence intervals for the POT method.

             Lower bound          Point estimate    Upper bound
             BS         ML        ML                ML         BS
  ξ̂         -0.079     -0.080    0.050             0.181      0.217
  σ̂          0.0101     0.009    0.012             0.0145     0.0142
  VaR_0.01   0.0445     0.0440   0.0493            0.0563     0.0566

Interpretation of the standard errors usually involves assuming that, if the same fit could be repeated many times on data coming from the same source, the maximum likelihood estimates of the parameters would approximately follow a normal distribution; confidence intervals, for example, are often based on this assumption. However, that normal approximation may or may not be a good one. To assess how good it is in this example, we can use a bootstrap simulation. We generate 1000 replicate datasets by re-sampling from the data, fit a GP distribution to each one, and save all the replicate estimates. As a rough check on the sampling distribution of the parameter estimators, we can look at histograms of the bootstrap replicates (Figure 4a).

Figure 4: (a) Bootstrap estimation of ξ and σ and (b) QQ plot for the estimates of ξ and log(σ).

The histogram of the bootstrap estimates for ξ appears to be only a little asymmetric, while that for the estimates of σ definitely appears skewed to the right. A common remedy for that skewness is to estimate the parameter

and its standard error on the log scale, where a normal approximation may be more reasonable. A QQ plot is a better way to assess normality than a histogram, because non-normality shows up as points that do not approximately follow a straight line. We check it to see whether the log transform for σ is appropriate. The bootstrap estimates for ξ and log(σ) appear acceptably close to normality, while a QQ plot for the estimates of σ on the unlogged scale would confirm the skewness already seen in the histogram. Thus, it is more reasonable to construct a confidence interval for σ by first computing one for log(σ) under the assumption of normality, and then exponentiating to transform that interval back to the original scale for σ (Figure 4b).

4.3 Method of Block Maxima

We now apply the block maxima method to our daily return data. The calendar naturally suggests periods such as weeks, months, or quarters; we choose weekly periods. Our sample has thus been divided into 345 non-overlapping sub-samples, each containing the daily returns of the successive calendar weeks. The absolute value of the minimum return in each of the blocks constitutes the data points in the sample of minima, which are used to estimate the generalized extreme value distribution (GEV). The standard GEV is the limiting distribution of normalized extrema. The maximum likelihood estimates we obtain are ξ̂ = 0.1672, σ̂ = 0.0104, and µ̂ = 0.0170. In Figure 5a we give the plot of the sample distribution and the corresponding fitted GEV distribution. In practice the quantities of interest are not the parameters themselves, but the quantiles, also called return levels, of the estimated GEV.
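The block-maxima construction just described (divide the series into blocks, record the absolute value of each block's minimum, and fit a GEV by maximum likelihood) can be sketched as follows. The data and block length are illustrative stand-ins for the KSE series and calendar weeks, and SciPy's GEV shape c corresponds to -ξ in the notation above:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)
# Simulated heavy-tailed daily returns standing in for the KSE series.
returns = 0.01 * rng.standard_t(df=4, size=3450)

# Blocks of 10 trading days (illustrative); the paper uses calendar weeks.
block = 10
minima = np.abs(returns.reshape(-1, block).min(axis=1))  # |minimum return| per block

# Maximum likelihood fit of the GEV to the block extremes.
c, mu, sigma = genextreme.fit(minima)
xi = -c  # convert SciPy's shape to the tail index used in the text
print(f"xi={xi:.3f}, mu={mu:.4f}, sigma={sigma:.4f}")
```

With 3,450 simulated days this yields 345 block extremes, the same sample size as the paper's weekly minima.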
The return level R_k is the level we expect to be exceeded in one out of k periods:

R_k = H^(-1)_(ξ,σ,µ)(1 - 1/k)

Substituting the parameters ξ, σ and µ by their estimates, we get

R̂_k = µ̂ - (σ̂/ξ̂) [ 1 - (-log(1 - 1/k))^(-ξ̂) ]    if ξ ≠ 0
R̂_k = µ̂ - σ̂ log(-log(1 - 1/k))                    if ξ = 0        (4)

Taking for example k = 10, we obtain for our data R_10 = 0.0455, which means that the maximum loss observed during a period of one week will exceed 4.55% in one out of ten weeks on average. We could compute confidence limits for R_10 using asymptotic approximations, but those may not be valid.
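Plugging the reported point estimates (ξ̂ = 0.1672, σ̂ = 0.0104, µ̂ = 0.0170) into equation (4) reproduces the quoted return level up to rounding of the parameters; a small Python sketch:

```python
import numpy as np

def return_level(k, xi, mu, sigma):
    """R_k = mu - (sigma/xi) * (1 - (-log(1 - 1/k))**(-xi)), i.e. H^(-1)(1 - 1/k)."""
    y = -np.log(1 - 1 / k)
    return mu - (sigma / xi) * (1 - y ** (-xi))

r10 = return_level(10, xi=0.1672, mu=0.0170, sigma=0.0104)
print(f"R_10 = {r10:.4f}")  # prints R_10 = 0.0454, close to the paper's 0.0455
```

The function is just the GEV quantile at probability 1 - 1/k, so larger k (rarer events) always gives a larger return level.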

Figure 5: (a) Sample distribution of weekly maxima and corresponding fitted GEV distribution and (b) relative profile log-likelihood function and 95% confidence interval for R_10.

Instead, we use a likelihood-based method to compute confidence limits. This method often produces more accurate results than one based on the estimated covariance matrix of the parameter estimates. The relative profile log-likelihood is plotted in Figure 5b. For α = 0.05 the interval estimate for R_10 is [0.0407; 0.0502]. We also generated 1000 bootstrap samples and computed the bootstrap confidence intervals for the GEV parameters and R_10 (Table 2).

Table 2: Point estimates and 95% maximum likelihood (ML) and bootstrap (BS) confidence intervals for the GEV method.

             Lower bound          Point estimate    Upper bound
             BS         ML        ML                ML         BS
  ξ̂          0.089      0.075    0.167             0.259      0.249
  σ̂          0.009      0.009    0.010             0.011      0.011
  R_10       0.041      0.042    0.045             0.050      0.052

5 Conclusion

We presented some methods of EVT to analyze financial data, also with the aim of illustrating how powerful these methods are. We also compared the VaR estimated with the POT method with the VaR proposed by the Basel

accord. Assuming the normal distribution for the observations until 1989, the 1% lower quantile is 1.95. Multiplying this value by 3 gives 5.86, whereas in our calculation the upper bound for VaR is 4.93 (Table 1). Clearly, the POT method provides more accurate information. This analysis is a good starting point, and it demonstrates that EVT can play an important role in the field of risk management.

References

Balkema, A.A. and L. de Haan (1974), Residual life time at great age, Annals of Probability 2, 792-804.

Castillo, E. and A.S. Hadi (1997), Fitting the generalized Pareto distribution to data, Journal of the American Statistical Association 92(440), 1609-1620.

Embrechts, P., C. Kluppelberg and T. Mikosch (1999), Modelling extremal events, British Actuarial Journal 5(2), 465-465.

Embrechts, P. and H. Schmidli (1994), Modelling of extremal events in insurance and finance, Mathematical Methods of Operations Research 39(1), 1-34.

Fisher, R.A. and L.H.C. Tippett (1928), Limiting forms of the frequency distribution of the largest or smallest member of a sample, Proceedings of the Cambridge Philosophical Society 24, 180-190.

Gnedenko, B. (1943), Sur la distribution limite du terme maximum d'une série aléatoire, Annals of Mathematics 44(3), 423-453.

Jorion, P. (2007), Value at Risk: The New Benchmark for Managing Financial Risk, McGraw-Hill, New York.

McNeil, A.J., R. Frey and P. Embrechts (2005), Quantitative Risk Management: Concepts, Techniques, and Tools, Princeton University Press, Princeton, NJ.

Pickands, J. (1975), Statistical inference using extreme order statistics, Annals of Statistics 3(1), 119-131.

Wong, W.K. (2009), Backtesting Value-at-Risk based on tail losses, Journal of Empirical Finance.