FRTB. NMRF Aggregation Proposal


FRTB NMRF Aggregation Proposal, June 2018

Agenda

1. Proposal on NMRF aggregation
   1.1. On the ability to prove correlation assumptions
   1.2. On the ability to assess correlation ranges
   1.3. How a calculation could be performed
   1.4. Proposed wording
   1.5. Benefits of the method
2. Proposal on modellability assessment
3. Calibration of DRC correlations

1. Proposal on NMRF aggregation

Rationale: The current proposal for NMRF aggregation (a simple sum of individual stresses) is extremely conservative, to the point that no credible correlation structure could yield such a capital add-on. The proposal below offers a robust method to calculate an add-on under the worst credible correlation. We suggest that this is an appropriate method for assessing the required capital for NMRFs.

The method developed below is transparent, easy to communicate to all stakeholders, and incentivizes banks to produce high-quality, robust models by relying on measures performed on the model outputs (1) for the estimate of the credible range of correlations.

The proposed method adds a degree of complexity to the NMRF calculation, but:
- the idea is conceptually sound, easy to communicate and provably conservative,
- it doesn't rely on unprovable correlations,
- the added complexity is mostly an analysis of residual (1) correlations, which brings risk-management benefits,
- the method allows the same rule to be used for all sorts of NMRFs, including credit and equities,
- it materially reduces the capital add-on, but in a way which depends on the quality of the risk-management model and on the actual diversification of those NMRFs.

We first develop why allowing aggregation under a zero-correlation assumption, provided that the appropriateness of the assumption is proved, might be unworkable (1.1); why credible correlation ranges are assessable (1.2); and how a worst credible correlation assumption might be calculated (1.3). Language for inclusion in the amended regulation is proposed (1.4). In conclusion, we indicate the benefits that this method would yield (1.5).

(1) Most banks will use projections and other transformations in order to reduce NMRFs to a core unobservable component, or residual. This method penalizes poor transformations which would produce chaotic or correlated time series.

1.1. On the ability to prove correlation assumptions

Correlation assumptions, including the proposed aggregation under an assumption of zero correlation, are typically difficult to prove even when the available time series are of high quality, for the reasons set out below. Moreover, many NMRFs will not be associated with high-quality historical data.

a) Correlation estimates require many independent observations. When measuring the correlation between two time series, the error of the estimate is approximately a function of the number of independent returns that can be calculated, that is, a function of (length of observation) / (Liquidity Horizon) (2). The error when measuring correlations tends to be large; for instance, when measuring the correlation between two uncorrelated Gaussian series of 100 returns, the standard deviation of the measured correlation is 10.0%, assuming constant volatility. It is not possible to assess correlations in a stressed period using a Liquidity Horizon long enough to match NMRF Liquidity Horizons. Even using a 10-day Liquidity Horizon, it takes 4 years to get 100 independent points, and volatility is variable in reality.

b) Variable volatility increases the error of correlation measurements. The noise when measuring the correlation between two time series is increased by a non-constant volatility. NMRF stresses will be calibrated on periods of stress, so a different volatility regime is expected in part of the historical data. Even the existence of a few days under a different volatility regime drastically deteriorates the quality of measured correlations. If, in the example above, 10 of the 100 returns are simulated with a 4-fold increase in volatility, the standard deviation of the measured correlation jumps to 19.8%. A single day with a 4-fold increase would raise it to 15.8%.

c) It is not generally possible to assess the reason for the variability of correlation measures. The existence of volatility regimes in stressed periods is generally obvious to human observers, but measuring the effect of the change in volatility (which may or may not be gradual) on the estimates of correlations is impossible unless strong assumptions are first made, such as the invariability of correlations. This assumption amounts to assessing individual stresses on the stressed period while assessing the lack of correlation on a non-stressed period, and the amount of data required for such an analysis might still be more than is available. In the example above, observing an average correlation of 0 with a large standard deviation doesn't disprove the zero-correlation assumption, but neither does it prove its appropriateness. Reaching an average correlation of 0 is straightforward (3), but justifying that, in so doing, residuals can be represented as independent random variables might not be achievable (4).

(2) As shown in Part 3, the use of overlapping returns doesn't significantly improve the variability of measured correlations. In Part 3, in the context of the DRC, the use of 1-year returns over 11 years (that is, 2,500 returns) yields an accuracy comparable to that obtained with 14 independent returns, a lot closer to 11 than to 2,500.
(3) It is an effect of a Least Squares regression, regardless of whether the assumption of constant volatility is met.
(4) Where different volatility regimes are present, a Least Squares regression will give a disproportionate weight to the returns with high volatility, resulting in an inaccurate regression. Reducing the weight on stressed days is akin to assessing correlations mostly on non-stressed periods.
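The orders of magnitude quoted in a) and b) can be checked with a short Monte Carlo experiment. The sketch below (Python; `corr_std` and its parameter values are our own illustrative choices, not part of the proposal) simulates pairs of independent Gaussian series and reports the dispersion of the measured correlation, with and without a handful of high-volatility days:

```python
import numpy as np

rng = np.random.default_rng(0)

def corr_std(n_returns=100, n_shocked=0, shock=4.0, n_trials=20_000):
    """Standard deviation of the sample correlation between two independent
    Gaussian return series, with an optional handful of high-volatility days."""
    vol = np.ones(n_returns)
    vol[:n_shocked] = shock              # days under a stressed volatility regime
    corrs = np.empty(n_trials)
    for k in range(n_trials):
        x = rng.standard_normal(n_returns) * vol
        y = rng.standard_normal(n_returns) * vol
        corrs[k] = np.corrcoef(x, y)[0, 1]
    return corrs.std()

print(corr_std())              # ~0.100: constant volatility, 100 returns
print(corr_std(n_shocked=10))  # ~0.198: 10 of 100 returns at 4x volatility
print(corr_std(n_shocked=1))   # ~0.158: a single 4x-volatility day
```

With these sample sizes the simulation reproduces approximately the 10.0%, 19.8% and 15.8% figures above; the dispersion scales roughly as 1/sqrt(N) for N independent returns, which is why long Liquidity Horizons make correlations effectively unmeasurable.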

1.2. On the ability to assess correlation ranges

It can be difficult to prove a correlation assumption, in particular when the period of measurement is short and when volatilities are variable. The difficulty in that assessment is that measured correlations can be extremely noisy in the periods which are the most relevant to a risk model. A consequence of that noise is that the range of measured correlations is wider than the range of underlying correlations (5). If an aggregation method relies on a range of correlations rather than on a single correlation, and a wider range of possible correlations yields a more conservative aggregation, then the use of that noisy estimate of correlation ranges is conservative.

When a sufficient set of time series is available, it is possible to use a confidence interval measured on historical correlations as a conservative proxy for the confidence interval on the underlying correlation (6). In performing that estimate, only Risk Factors which underwent the same projections/transformations should be considered as a set. In the context of NMRFs, there will generally be a sufficient number of similar Risk Factors inside each Standard Approach bucket (7) to produce a good-quality and conservative estimate of the credible range of correlations, both between Risk Factors of that bucket and between Risk Factors of that bucket and other Risk Factors that would be aggregated in a single NMRF capital add-on.

(5) Measured correlation = genuine correlation + measurement noise. The amplitude of the noise decreases when the genuine correlation gets closer to -100% or +100%, but this is unlikely to be material when genuine correlation levels are within [-50%, +50%]. Heteroscedasticity doesn't prevent the use of an empirical confidence interval.
(6) When a large set of high-quality data is available, it is possible to refine the interval.
(7) That is, for Internal Model Risk Factors, that sensitivities to those Internal Model Risk Factors would accrue in a Standard Approach Risk Factor of that bucket.
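As an illustration of the empirical confidence interval in footnote 5, the following sketch (Python; `correlation_range` is a hypothetical helper of ours) takes a set of similar risk factors that underwent the same transformation and returns the quantile-based range of their pairwise measured correlations:

```python
from itertools import combinations

import numpy as np

def correlation_range(returns, confidence=0.90):
    """Empirical confidence interval of the pairwise correlations measured
    across a set of similar risk factors.
    returns: (n_days, n_factors) array of residual returns, all factors
    having undergone the same projection/transformation."""
    corr = np.corrcoef(returns, rowvar=False)        # n_factors x n_factors
    pairs = [corr[i, j] for i, j in combinations(range(corr.shape[0]), 2)]
    alpha = (1.0 - confidence) / 2.0
    return np.quantile(pairs, alpha), np.quantile(pairs, 1.0 - alpha)
```

Because the measured correlations carry the measurement noise discussed in 1.1, this range is wider than the range of underlying correlations, which is what makes it a conservative input to the aggregation below.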

1.3. How a calculation could be performed

So as not to recognize correlations which are not recognized in the SBA aggregation framework, we propose a separate aggregation for each Risk Class and Risk Type: separate NMRF capital add-ons would be calculated for IR-Delta, IR-Vega, FX-Delta, etc. Firms would add to the framework the relevant Risk Types which don't exist in the Standard Approach, such as IR-Correlation, Equity-Correlation, etc. Those are designated below as Risk Class/Type, or RC/T.

The first step of the calculation is an estimate of the ES-equivalent stresses. Those stresses are not converted into capital add-ons (as the sign of the stresses matters) but into signed charges. Where the shock up results in a P&L of $x$ and the shock down in a P&L of $y$, the charge associated with Risk Factor $i$ is:

$$C(i) = \operatorname{sign}(x) \cdot \max(|x|, |y|)$$

The second step is an estimate of correlation ranges. For each bucket $b \in \{1 \dots N\}$, two correlation ranges are estimated:
- $r_b^{Bucket} = [\rho_1, \rho_2]$, the 90% confidence interval for the correlation between two Risk Factors within bucket $b$,
- $r_b^{RC/T} = [\rho_3, \rho_4]$, the 90% confidence interval for the correlation between a Risk Factor of $b$ and a Risk Factor of the same RC/T that does not belong to $b$.

The aggregation is performed under the assumption that each Risk Factor can be modelled as the sum of three Gaussian random variables: one for the bucket common factor, one for the RC/T common factor, and a noise. The aggregation is:

$$SES = \max \sqrt{ \Bigl( \sum_{b=1}^{N} \sum_{i \in b} \rho_b^{RC/T} C_i \Bigr)^2 + \sum_{b=1}^{N} \Bigl( \sum_{i \in b} \rho_b^{Bucket} C_i \Bigr)^2 + \sum_{b=1}^{N} \sum_{i \in b} \Bigl( 1 - \bigl(\rho_b^{RC/T}\bigr)^2 - \bigl(\rho_b^{Bucket}\bigr)^2 \Bigr) C_i^2 }$$

where the maximum is taken, for each bucket $b$, over

$$\rho_b^{Bucket} \in r_b^{Bucket}, \qquad \rho_b^{RC/T} \in r_b^{RC/T} \cap \Bigl[0\%,\ \sqrt{1 - \bigl(\rho_b^{Bucket}\bigr)^2}\Bigr]$$
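As a cross-check of the formula, here is a minimal sketch (Python; `ses`, `charges_by_bucket` and the argument layout are our own illustrative names) that evaluates the expression under the square root for one given choice of correlations per bucket:

```python
import numpy as np

def ses(charges_by_bucket, rho_rct, rho_bucket):
    """Evaluate the SES aggregation for one Risk Class/Type.
    charges_by_bucket: list of arrays of signed charges C_i, one per bucket.
    rho_rct, rho_bucket: one RC/T and one bucket correlation per bucket."""
    # RC/T common factor: all buckets load on the same Gaussian variable
    rct = sum(r * c.sum() for r, c in zip(rho_rct, charges_by_bucket)) ** 2
    # Bucket common factors: one independent Gaussian variable per bucket
    bucket = sum((r * c.sum()) ** 2 for r, c in zip(rho_bucket, charges_by_bucket))
    # Idiosyncratic noise: the remaining variance of each risk factor
    noise = sum((1.0 - r1**2 - r2**2) * (c**2).sum()
                for r1, r2, c in zip(rho_rct, rho_bucket, charges_by_bucket))
    return float(np.sqrt(rct + bucket + noise))
```

The three terms are the variances contributed by the RC/T factor, the bucket factors and the residual noise of the three-factor Gaussian model; since the signed charges play the role of $K$-scaled volatilities, the constant $K$ cancels throughout.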

Regulators might want to mandate the correlation ranges until banks' own assessments of correlation ranges have been subject to internal or external review. In that context, the suggested ranges are:
- r_b^Bucket = [0%, 100%] for NMRFs identical (except for granularity) to a Standard Approach Risk Factor,
- r_b^Bucket = [-50%, 100%] for NMRFs which are residuals from the projection of a Standard Approach Risk Factor,
- r_b^Bucket = [-100%, 100%] for all others,
- r_b^RC/T = [-100%, 100%].

The calculation of the maximum prescribed by the formula above can be produced in a system separate from the one used for the production of ES, and is not challenging to implement (8). The underlying idea is that Risk Factor $i$, which belongs to bucket $b$, is modelled as:

$$RF_i = K\, C_i \Bigl( \rho_1 B_b + \rho_2\, RCT + \sqrt{1 - \rho_1^2 - \rho_2^2}\; \varepsilon_i \Bigr)$$

where $B_b$, $RCT$, $\varepsilon_i \sim N(0,1)$ and $K$ is the coefficient that converts an ES equivalent into a volatility equivalent. A volatility for the sum of all Risk Factors in each RC/T can be calculated, maximized, and converted back into an ES equivalent ($K$ disappears from the whole calculation).

(8) A genetic algorithm would be easy to implement and efficient; brute-force methods could be used as well.
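Footnote 8 leaves the search method open; below is a minimal brute-force sketch, reusing the illustrative `ses` helper above. A grid search is exponential in the number of buckets, which is presumably why the footnote also mentions genetic algorithms for larger problems:

```python
from itertools import product

import numpy as np

def worst_case_ses(charges_by_bucket, bucket_ranges, rct_ranges, n_grid=11):
    """Worst credible correlation structure by brute-force grid search.
    bucket_ranges / rct_ranges: one (low, high) tuple per bucket."""
    best = 0.0
    bucket_grids = [np.linspace(lo, hi, n_grid) for lo, hi in bucket_ranges]
    for rho_bucket in product(*bucket_grids):
        # The RC/T loading is floored at 0% and capped at sqrt(1 - rho_bucket^2)
        rct_grids = []
        for (lo, hi), rb in zip(rct_ranges, rho_bucket):
            cap = np.sqrt(max(1.0 - rb**2, 0.0))
            lo_eff = min(max(lo, 0.0), cap)
            rct_grids.append(np.linspace(lo_eff, min(hi, cap), n_grid))
        for rho_rct in product(*rct_grids):
            best = max(best, ses(charges_by_bucket, rho_rct, rho_bucket))
    return best
```

For example, `worst_case_ses([np.array([1.0, -0.5]), np.array([2.0])], [(-0.5, 1.0), (0.0, 1.0)], [(-1.0, 1.0), (-1.0, 1.0)])` returns the add-on for two buckets under the suggested mandated ranges.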

1.4. Proposed wording

Stresses associated with Non-Modellable Risk Factors will be aggregated separately for each Risk Class and Risk Type, under the worst credible correlation structure. The estimate of the worst credible correlation structure will meet the following:
i. correlation ranges are assessed with a 90% confidence,
ii. in estimating correlation ranges, only Risk Factors which are sufficiently similar are used to produce correlation ranges,
iii. the correlation structure allows for at least one explanatory variable per Bucket (as defined in the Standard Approach) and one per Risk Type/Class.

1.5. Benefits of the method

We believe that this method would both allow a prudent calculation of the capital add-on for NMRFs and render the charge more risk-sensitive. Further benefits include:
i. the method is clearly conservative: it doesn't assume a correlation structure, but aggregates under a worst possible correlation structure,
ii. the method is adjustable: it is possible to increase or decrease the charge by requiring a higher/lower degree of confidence in the correlation estimates, or by requiring estimates to be calculated with a longer liquidity horizon or on a shorter stressed period (9),
iii. it incentivizes the collection of high-quality data, as better data would reduce the range of estimated correlations,
iv. it would strengthen banks' modelling and increase the scrutiny of the correlation of the least observable Risk Factors.

(9) A shorter period or a longer horizon both reduce the amount of data available for the calculation of correlations, and directly widen the confidence interval. [Graph: standard deviation of measured correlations as a function of the number of independent observation points, with and without 10 (or 1) high-volatility returns.]

2. Proposal on modellability assessment

The consultative paper puts forward a bucketing method which relies on fixed buckets. It is the industry's opinion, as well as ours, that fixed buckets are not the best approach, as those buckets might be more or less granular than is actually required, depending on the Risk Factors and on the bank's trading strategies. A change of bucketing on major curves would also constitute a material model change, so bucketing could not easily be adapted to market or trading patterns.

While most of the industry considers that allowing banks to define their own buckets is a suitable alternative solution, we understand that regulators might be reluctant to allow such a method, as it might not be stable (10), might be difficult to compare across banks, and might be overly reliant on firms' modelling choices. We therefore suggest a solution which uses a similar rationale but is more stable, easier to compare between banks, and which would give regulators more leeway to set industry-wide guidance and firm-specific requirements where needed.

Where bucketing assumes that there are discrete and non-overlapping sets of Risk Factors that should be assessed together, we suggest using proxy-observation, where each Risk Factor is allowed to proxy-observe other Risk Factors that are extremely similar to it. For instance, each Risk Factor may be used to proxy-observe all Risk Factors which have a greater than 98% correlation with it (a counting sketch follows this section).

Example: Interest Rate curve.
- Bucketing: when a trade is observed in the ]3Y, 5Y] range, this trade counts towards the assessment of modellability of all Risk Factors in that ]3Y, 5Y] range.
- Proxy-observation: when a trade is observed on the 4.5Y tenor, this trade counts towards the assessment of modellability of all Risk Factors that are sufficiently correlated to that point, for instance 4Y to 6Y. It doesn't observe the 3Y point, which is too different, but it does observe the 6Y point, in another bucket, which is sufficiently similar.

Benefits:
o Stability of the method. That stability better allows internal and external auditors to challenge assumptions.
o Comparison between banks is easier (which Risk Factors are proxy-observed by which other Risk Factors), since there is no boundary effect.

(10) For firms to use their own buckets, these buckets would need to be assessed from correlations between Risk Factors or other quantitative methods. Because all Risk Factors need to belong to a bucket, and some Risk Factors will move in and out of buckets, those buckets could be unstable. For instance, if the 2Y tenor drops out of the [0Y, 2Y] bucket, it will have a domino effect and change the definition of all buckets, and possibly the number of buckets. It will also affect the modellability assessment and could have a significant impact on the NMRF charge.
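The counting sketch referenced above (Python; `modellability_counts` is a hypothetical helper, and the actual regulatory observation-count rules are more involved than this illustration) shows the mechanics: an observation on one risk factor also counts for every sufficiently correlated neighbour, with no bucket boundary effect.

```python
import numpy as np

def modellability_counts(observations, corr, threshold=0.98):
    """Proxy-observation: a real-price observation on risk factor j also
    counts for every risk factor i whose correlation with j is >= threshold.
    observations: dict {factor_index: iterable of observation dates}
    corr: (n, n) correlation matrix between risk factors (e.g. curve tenors)."""
    n = corr.shape[0]
    dates = {i: set() for i in range(n)}
    for j, obs in observations.items():
        for i in range(n):
            if corr[i, j] >= threshold:
                dates[i] |= set(obs)        # the trade at j proxy-observes i
    return {i: len(d) for i, d in dates.items()}
```

In the interest-rate example above, a trade dated on the 4.5Y tenor would increment the counts of every tenor whose correlation with 4.5Y exceeds 98% (say 4Y to 6Y), regardless of any ]3Y, 5Y] bucket boundary.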

3. Calibration of DRC correlations

Section 186-b requires that correlations be based on data covering a period of 10 years that includes a period of stress as defined in paragraph 181(d), and be based on a one-year liquidity horizon. As indicated above (1.1.a), the use of overlapping returns does not materially improve the accuracy of calculated correlations (11). The 95% confidence interval for the correlation measured between two uncorrelated series over 11 years using 1-year overlapping returns is [-42%, 42%]. The interval for two series with a 50% correlation between one another is [12.5%, 76.5%]. The interval for two series with an 80% correlation between one another is [60%, 92%].

We suggest that the requirement to use a one-year liquidity horizon when calculating correlations be removed. Firms may consider the effect of using a Liquidity Horizon that is too short when calibrating correlations, but merely to eliminate bias (where the average correlations vary depending on the liquidity horizon). Our experience also shows that longer Liquidity Horizons tend to produce lower correlation levels (12), and that most banks' DRC increases with correlation, so a longer Liquidity Horizon tends to be less conservative. It is a real possibility that correlations would be severely misestimated on some key pairs of obligors, leading to an erroneous assessment of required capital.

[Graph: distribution of the correlation measured on two series of 2,749 returns with true correlations of 80%/50%/0%, when measuring the correlation on 2,500 sums of 250 consecutive returns. This simulates 1Y overlapping returns with 10Y of return history (so 11Y of total history).]

(11) Specifically, using 10 years (plus 1 year of preceding data) of overlapping returns of 1 year each (250 days), the standard deviation of the measured correlation, despite the 2,750 points of underlying data, is only moderately better than that obtained when using 11 independent returns: 25.27% vs 31.36% (measured when simulating uncorrelated series).
(12) After an initial increase, which reflects the elimination of noise caused by closing effects and differing closing times.
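The figures in footnote 11 and in the graph can be reproduced with a short simulation. The sketch below (Python; `overlapping_corr_std` is an illustrative helper of ours) builds two correlated daily series, forms overlapping 250-day sums, and measures the dispersion of the resulting correlation estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

def overlapping_corr_std(true_rho=0.0, n_days=2749, horizon=250, n_trials=2000):
    """Dispersion of the correlation measured on 1Y (250-day) overlapping
    returns built from ~11Y of daily data, for a given true correlation."""
    estimates = np.empty(n_trials)
    for k in range(n_trials):
        z1 = rng.standard_normal(n_days)
        z2 = rng.standard_normal(n_days)
        x = z1
        y = true_rho * z1 + np.sqrt(1.0 - true_rho**2) * z2
        # 2,500 overlapping 250-day sums, as in the graph caption
        cx = np.convolve(x, np.ones(horizon), mode="valid")
        cy = np.convolve(y, np.ones(horizon), mode="valid")
        estimates[k] = np.corrcoef(cx, cy)[0, 1]
    return estimates.std()

print(overlapping_corr_std(0.0))   # ~0.25, vs ~0.30 for 11 independent returns
```

Despite the 2,500 overlapping data points, the dispersion is close to that obtained from only 11 independent annual returns, which is the basis for the wide confidence intervals quoted above.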

CAPTEO
11 Avenue de l'Opéra, 75001 PARIS
glethu@capteo.com
Partner, Risk & Finance
Tel: +33 677 566 922
Web site: www.capteo.com