Parameterization and Calibration of Actuarial Models Paul Kneuer 2008 Enterprise Risk Management Seminar Tuesday, April 15, 11:45 a.m. - 1:00 p.m.


Parameterization and Calibration of Actuarial Models
Paul Kneuer
2008 Enterprise Risk Management Seminar, ERM Symposium
Tuesday, April 15, 2008, 11:45 a.m. - 1:00 p.m.

Parameterizing Models: Volatility Measures (Paul Kneuer, ERM Symposium, April 15, 2008)

Parameterizing Models: Volatility Measures
The bad news: It's hard.
The good news: It's impossible.

An integrated and dynamic view of the volatility of transactions would be a powerful insight:
- Overall portfolio capital needs, in real time
- Relative capital usage of transactions
- Marginal cost pricing

Unfortunately, data is a poor source for the volatility of the next period. Past performance is not a promise of future returns. In real-world situations, what we don't know that we don't know can have more cost (and value) than what we do know. It's hard.

A simple model:
- Define a loss process so that it is a Poisson (counting) process: frequency, not severity.
- Look at an historical period and estimate the Poisson mean.
- Project the distribution of counts next year.
- Measure the value as E(X) + R x SD(X).
[Chart: contract value vs. known frequency (1 to 5), for risk charges of 15%, 20%, 25% and 30%]
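As a concrete illustration of the calculation described above (a reconstruction, not part of the original deck): for a Poisson count, SD(X) = sqrt(mean), so the value is simply the mean plus the risk charge times the square root of the mean. The sketch below reproduces the "Value of a Contract (Assuming No Parameter Risk)" table shown later in the deck.

```python
import math

def contract_value(mean_frequency: float, risk_charge: float) -> float:
    """Value a contract as E(X) + R * SD(X) for a Poisson count X."""
    expected = mean_frequency               # E(X) for a Poisson
    sd = math.sqrt(mean_frequency)          # SD(X) = sqrt(mean) for a Poisson
    return expected + risk_charge * sd

for m in (1, 2, 3, 4, 5, 10, 50):
    print(m, [round(contract_value(m, r), 3) for r in (0.15, 0.20, 0.25, 0.30)])
# e.g. m=2, R=0.15 -> 2.212, matching the "Value of a Contract" table
```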

Now, acknowledge that the data from last year is only a sample from a distribution that we can never know.
[Diagram: prior mean, observation, exposure - how much weight should each get?]

The Value of Next Year's Contract Reflects the Risk that We Have a Poor Estimate of Last Year's Mean
The value of parameter risk is proportional to:
- C.V. of uncertainty around the prior mean
- Square root of annual observed claims
- Correlation between this risk and the overall market
- Market premium risk charges
- Square root of 1/experience period (in years)
[Chart: relative increase in value (value of parameter risk / expected loss), 0% to 350%, vs. observed frequency (1 to 5), for prior Gamma CVs of 0.5, 1, 2, 4, 8 and 16]

Required Reading: The Black Swan by Nassim Taleb.
In many areas, the next data observation can be so discontinuous that it invalidates the form of distribution you would have chosen. For the population of ERM Symposium attendees, would you bet on the relative increase in the overall average size following the next arrival, based on:
- Height?
- Career experience?
- Journal citations?
- Net worth?

Black Swans: maximum swing to the average from the next observation.

Observation                              Swing to Average
Aaron Gray (7' center for the Bulls)     < 1/4 inch part of six feet, 0.3%
Charles Hewitt (FCAS, 1951)              < 1 week part of 20 years, 0.1%
Merton Miller (U. of C. Nobelist)        > 100 citations part of < 5, 20x
Bill Gates (Harvard drop-out)            > $100Mn part of < $1Mn, 100x

Fields of analysis exposed to black swans cannot be approximated with small, unbiased, normal errors. It's impossible.

Tools Invalidated by Black Swans:
- Volatility measures
- Sharpe ratio
- CAPM
- APM
- Black-Scholes
- Duration immunization
- VaR
- TVaR
- Kreps pricing
- Mango/Rhum ordering
- RBC
- BCAR
- DFA
- Chain ladder confidence intervals
- Experience rating credibility

Alternatives to Transactional Volatility in ERM:
- Scenario testing
- Maxi-min (When am I the best off if the worst happens?)
- Contract aggregate management
- Natural hedging (Go short on what you are long of)

P&C Industry Black Swans

WHAT?                               WHEN?                   WHY A SURPRISE?
Katrina                             2005                    Levee failure in a windstorm
Hurricane frequency cycles          2000s, 1960s, 1930s     Short memories
Enron/Andersen, etc.                2003                    Clash of D&O and E&O; clash across firms
9/11 attacks                        2001                    Foreign terror in the U.S.; clash of Property, Liability, Life, WC and Aviation
Soft casualty market                1998-2001               Cycles affect coverage, reserving and price monitoring, not just rates
Tobacco liability settlements       Late 1990s              Government warning does not pre-empt manufacturers' duties
Northridge earthquake               1994                    Unmapped fault
Mold                                1990s                   Excluded physical damage collected as water damage or BI liability
LMX spiral                          Early 1990s             Higher layers exposed when the same amount is counted again
Construction defects                1980s-1990s             Damage-to-own-work exclusion bypassed

P&C Industry Black Swans (cont'd)

WHAT?                                   WHEN?                    WHY A SURPRISE?
Piper Alpha                             1988                     Multiple insureds and multiple limits at one rig
Widespread reinsurance uncollectibles   1980s                    Not cat-driven
Repetitive stress injuries in WC        1980s                    Neither accident nor illness
European windstorms                     1987                     Short memories
Superfund                               Early 1980s              First-party clean-up costs covered as third-party liability
Tenerife runway crash                   1977                     Collision causes clash of limits
Products coverage for asbestos          1970s (BI), 1990s (PD)   Workers not covered as WC; clean-up costs as liability
Pharmaceutical class actions            1960s                    Expansion of the batch clause concept

Value of a Contract (Assuming No Parameter Risk)

Known Frequency        Risk Charge Factor
                   0.15      0.20      0.25      0.30
 1                 1.150     1.200     1.250     1.300
 2                 2.212     2.283     2.354     2.424
 3                 3.260     3.346     3.433     3.520
 4                 4.300     4.400     4.500     4.600
 5                 5.335     5.447     5.559     5.671
10                10.474    10.632    10.791    10.949
50                51.061    51.414    51.768    52.121

Increase in Value from Parameter Risk
(Result is shown relative to expected losses)

Prior Gamma's                    Observed Frequency
Coefficient of Variation     1       2       3       4       5      10      50
 0.50                    0.024   0.032   0.037   0.041   0.045   0.055   0.076
 1.00                    0.083   0.104   0.115   0.124   0.130   0.147   0.174
 2.00                    0.247   0.283   0.301   0.312   0.320   0.342   0.373
 4.00                    0.625   0.671   0.693   0.706   0.716   0.739   0.772
 8.00                    1.412   1.465   1.489   1.503   1.513   1.538   1.572
16.00                    3.006   3.062   3.087   3.102   3.112   3.137   3.172
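One way to reproduce this table (a reconstruction, not taken from the deck): treat the observed frequency as the mean of a Gamma-mixed Poisson whose mixing distribution has the stated coefficient of variation, so the total variance is m + (c·m)². Using the same E(X) + R·SD(X) valuation with a 20% risk charge (an assumed value that matches the published figures), the increase in value relative to expected losses agrees with the table above.

```python
import math

def parameter_risk_loading(mean: float, prior_cv: float, risk_charge: float = 0.20) -> float:
    """Increase in contract value from parameter risk, relative to expected losses.

    Assumes a Gamma-mixed Poisson: Var(X) = mean + (prior_cv * mean)**2,
    and the valuation rule Value = E(X) + risk_charge * SD(X).
    The 20% risk charge is an assumption that reproduces the slide's table.
    """
    sd_with = math.sqrt(mean + (prior_cv * mean) ** 2)   # SD including parameter risk
    sd_without = math.sqrt(mean)                          # pure Poisson SD
    return risk_charge * (sd_with - sd_without) / mean

for cv in (0.5, 1, 2, 4, 8, 16):
    print(cv, [round(parameter_risk_loading(m, cv), 3) for m in (1, 2, 3, 4, 5, 10, 50)])
# e.g. cv=0.5, m=1 -> 0.024; cv=16, m=50 -> 3.172
```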

For Comments or Questions:
Paul J. Kneuer
Holborn Corporation
Wall Street Plaza, New York, NY 10005
212-797-2285
paulk@holborn.com
Please leave a business card if interested in a formal paper of the results shown today.

Parameterization and Calibration of Actuarial Models
Eric Sandberg
2008 Enterprise Risk Management Seminar
Tuesday, April 15, 11:45 a.m. - 1:00 p.m.

Underwriting Risk Shocks for Economic Capital: Two Approaches for Determining the 99.5th Percentile Shock for Parameter Risk (March 2008)

Methods for Determining UR Parameter Shocks
Two potential methods for determining the underwriting risk (UR) parameter shocks are discussed here:

Method A
- Calculate the best-fit trend line and variance for historical claims experience (using the A/E in each calendar year as the data points).
- Using the variance of the experience, calculate an additional year of experience equal to the 99.5th percentile A/E and fit a new best-fit trend line with this extra data point.
- The difference between the original and new best-fit trend lines is the UR parameter shock.

Method B
- Use credibility/statistical techniques to determine the 99.5% confidence interval for the historical claims experience.
- The UR parameter shock is the difference between the 99.5% confidence interval and the mean of the experience.

Simplified Example: Assumptions
An example illustrating the methodology and results of these two methods is included in the following slides. The assumptions used for this simplified example are:
- Secular improvement is 0 (i.e., the trend is assumed to be 0)
- The number of deaths in a single year follows a binomial distribution; this example uses A/E based on lives rather than amounts
- 10 years of historical claims experience is available (summary below)

                 Yr 1   Yr 2   Yr 3   Yr 4   Yr 5   Yr 6   Yr 7   Yr 8   Yr 9   Yr 10   Total
Single Year A/E  101%    91%    86%   105%   108%    95%   117%    99%    90%   107%    100%
Claims            101     91     86    105    108     95    117     99     90    107    1,000
Cumulative A/E   101%    96%    93%    96%    98%    98%   101%   100%    99%   100%
Cum. Claims       101    192    278    383    491    586    704    803    893   1,000

[Chart: single-year and cumulative A/E by year, 70% to 130%]

Simplified Example: Method A
Step 1: Calculate the best-fit line and the mean and variance of the historical A/Es.
[Chart: single-year A/E by year with best-fit line]
Single-year A/Es (Years 1-10): 101%, 91%, 86%, 105%, 108%, 95%, 117%, 99%, 90%, 107%
Mean = 100%, Variance = 0.9%, Std Dev = 9.6%

Step 2: Calculate an additional year of experience equal to the 99.5th percentile A/E.
99.5th percentile A/E = µ + 2.58 x StdDev = 100% + 2.58 x 9.6% = 124.8%

Simplified Example: Method A
Step 3: Add the additional data point (i.e., the 99.5th percentile A/E) and calculate the new best-fit line.
[Chart: single-year A/E for Years 1-11 with the original and new best-fit lines]
Original best-fit line = 100.0% A/E
New best-fit line = 102.2% A/E

Result: The 99.5th percentile UR parameter shock is 2.2%, calculated as the difference between the original and new best-fit lines.
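A minimal sketch of Method A on the example data (an illustration, not the presenter's spreadsheet; comparing the two fitted lines at their average level is an assumption that approximately reproduces the slide's figures):

```python
import numpy as np

ae = np.array([1.01, 0.91, 0.86, 1.05, 1.08, 0.95, 1.17, 0.99, 0.90, 1.07])  # historical A/Es
years = np.arange(1, len(ae) + 1)

mean, sd = ae.mean(), ae.std(ddof=1)            # about 100% and 9.6%
shock_point = mean + 2.58 * sd                  # 99.5th percentile A/E, about 124.8%

# Fit trend lines before and after appending the shocked observation
orig_fit = np.polyfit(years, ae, 1)
new_years = np.append(years, years[-1] + 1)
new_fit = np.polyfit(new_years, np.append(ae, shock_point), 1)

# Compare the average level of the two fitted lines (assumption: this is how the
# 100.0% vs 102.2% figures on the slide are read off)
orig_level = np.polyval(orig_fit, years).mean()
new_level = np.polyval(new_fit, new_years).mean()
print(f"UR parameter shock ~ {new_level - orig_level:.1%}")
# about 2.3% here; the slide's 2.2% reflects its rounded intermediate values (mean 100%, SD 9.6%)
```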

Simplified Example: Method B
Step 1: Calculate a 99.5% confidence interval around the aggregate (cumulative) A/E. Note that the confidence interval shrinks as more years of experience are used.

Year      Single Year A/E   Cumulative A/E   99.5% CI for Cum. A/E
Year 1         101%              101%             +/- 25.7%
Year 2          91%               96%             +/- 18.2%
Year 3          86%               93%             +/- 14.9%
Year 4         105%               96%             +/- 12.9%
Year 5         108%               98%             +/- 11.5%
Year 6          95%               98%             +/- 10.5%
Year 7         117%              101%             +/-  9.7%
Year 8          99%              100%             +/-  9.1%
Year 9          90%               99%             +/-  8.6%
Year 10        107%              100%             +/-  8.1%

[Chart: cumulative A/E with lower and upper 99.5% confidence bounds, 70% to 130%]

Result: The 99.5th percentile UR parameter shock is 8.1%, based on the 99.5% confidence interval for the experience A/E.
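The +/- figures above are consistent with a simple normal approximation in which the standard deviation of the cumulative claim count is the square root of the expected count, with about 100 expected claims per year (both assumptions inferred from the example, not stated on the slide). A hedged sketch:

```python
import math

expected_per_year = 100      # assumption: ~100 expected claims per year (1,000 over 10 years)
z = 2.576                    # 99.5% two-sided normal quantile

for year in range(1, 11):
    e = expected_per_year * year              # cumulative expected claims
    half_width = z * math.sqrt(e) / e         # CI half-width on the A/E ratio
    print(f"Year {year}: +/- {half_width:.1%}")
# Year 1: +/- 25.8%, ..., Year 10: +/- 8.1% (close to the slide's figures)
```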

Comments on Method A
Strengths:
- Intuitive: the magnitude of the shock decreases as more data points are added
- Consistent with how an acquiring party might interpret our experience after a shock event
Weaknesses:
- Doesn't take into account the credibility of each data point; all A/Es are given the same weight regardless of the number of claims
- Can produce very strange results when there are only a few data points
- Using too many data points can be an issue, since it may dampen the size of the shock too much; AEGON limits this to 10 years of data

Comments on Method B
Strengths:
- Intuitive: the magnitude of the shock decreases as more data points are added
- Takes into account the credibility of each data point and of the experience set as a whole
- Can be used with 1 year of experience data or 20+
Weaknesses:
- May not be consistent with how an acquiring party might interpret our experience after a shock event

Parameterisation and Calibration of Actuarial Models
A.J. Czernuszewicz PhD FIA and P.D. England PhD
2008 Enterprise Risk Management Seminar
Tuesday, April 15, 11:45 a.m. - 1:00 p.m.

Parameterisation and Calibration of Actuarial Models
A.J. Czernuszewicz and P.D. England
EMB Consultancy LLP, Saddlers Court, 64-74 East Street, Epsom, KT17 1HB
peter.england@emb.co.uk  andrzej@emb.co.uk  http://emb.co.uk

Abstract. In the underwriting risk component of standard simulation-based capital models, parameters are usually obtained by fitting distributions to data, using maximum likelihood techniques. The parameters are then usually treated as fixed and known. In this presentation, we consider extending the methodology using Markov chain Monte Carlo techniques, commonly used in Bayesian statistics, to obtain distributions of parameters. Furthermore, the underlying data are often not known with certainty, but are themselves partially estimated, and it is uncommon to allow for this data uncertainty. We present a way to incorporate data uncertainty into the parameter estimation process. Model uncertainty is also considered, simply by observing the effect on the results of using different distributions. The methodology is illustrated with an example that fits a distribution to a history of loss ratios, and uses the parameters obtained to forecast the aggregate claims in a new underwriting year. Stochastic reserving techniques are used to incorporate data uncertainty in the loss ratios. The price of a stop loss reinsurance applied to the aggregate distribution is then estimated assuming the parameters and data are known, and compared to the price that would be estimated if parameter uncertainty were considered, and also if parameter and data uncertainty were considered. We demonstrate that capital requirements and reinsurance prices are likely to be underestimated when parameter uncertainty and data uncertainty are not considered.

Keywords. Bayesian, Dynamic Financial Analysis, Internal Capital Models, Markov chain Monte Carlo, Parameter Uncertainty, Pricing, Stochastic Reserving.


Agenda
- Description of a standard approach for non-life insurance risk, including a simple example
- Taking account of parameter uncertainty: extending the example
- An approach to including data uncertainty: extending the example further!

Non-Life Insurance Risk: A Standard Approach
Prior year liabilities (reserve risk by line of business):
- Distributions of gross outstanding liabilities obtained using bootstrapping (or Bayesian approaches)
- Net down using a simple methodology
- Note: parameter uncertainty is taken into account
New business (underwriting risk by line of business):
- Exposure/premium given by business plan assumptions
- Claims split between attritional, large and catastrophe
- Attritional claims simulated in aggregate; large claims simulated individually; catastrophe claims simulated by event
- Net down using the actual reinsurance programme
- Note: parameter uncertainty is not usually taken into account

New Business: A Simplified Example
Do not split between attritional, large and catastrophe claims; just simulate all claims in aggregate.
Given a 10-year history of premium and claims information:
- Inflation- and rate-adjust the premiums
- Inflation-adjust the claims
- Create a history of loss ratios
- Fit a log-normal distribution using maximum likelihood
Given a gross premium estimate:
- Simulate a forecast loss ratio from a log-normal distribution, using the parameters obtained above
- Multiply by the premium estimate to give an aggregate claims distribution

Loss Ratios (Rate and Inflation Adjusted) by Year

Year   Expected Loss Ratio
1998        76%
1999        79%
2000        78%
2001        85%
2002        83%
2003        85%
2004        71%
2005        76%
2006        76%
2007        69%
2008         ?

[Chart: loss ratio by year, 50% to 100%]

MLE Parameters (log-normal fit): m = -0.253, s = 0.066

Stop Loss 10% xs 85%
Price = E(Recovery) + 25% x SD(Recovery) = $233
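A hedged sketch of the whole simplified example (fit a log-normal to the loss ratios by maximum likelihood, simulate next year's loss ratio, and price the stop loss). The 2008 gross premium of 50,000 comes from the data slide later in the deck; treating the stop-loss attachment and limit as percentages of that premium is an assumption, and simulation noise means the price is only of the same order as the $233 shown.

```python
import numpy as np

loss_ratios = np.array([0.76, 0.79, 0.78, 0.85, 0.83, 0.85, 0.71, 0.76, 0.76, 0.69])
premium_2008 = 50_000          # gross premium estimate from the data slide

# Maximum likelihood fit of a log-normal (MLE uses the 1/n variance)
log_lr = np.log(loss_ratios)
m, s = log_lr.mean(), log_lr.std(ddof=0)       # approximately -0.253 and 0.066

# Forecast aggregate claims with the parameters treated as fixed and known
rng = np.random.default_rng(0)
sim_lr = rng.lognormal(mean=m, sigma=s, size=100_000)
aggregate_claims = sim_lr * premium_2008

# Stop loss 10% xs 85% (attachment and limit assumed to be percentages of premium)
attach, limit = 0.85 * premium_2008, 0.10 * premium_2008
recovery = np.clip(aggregate_claims - attach, 0.0, limit)
price = recovery.mean() + 0.25 * recovery.std()
print(f"MLE: m={m:.3f}, s={s:.3f}; stop-loss price ~ {price:.0f}")
```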

The Story So Far
The data and parameters are considered fixed and known.
[Diagram: historical data / judgement -> parameters (m, s) -> forecast density of aggregate losses]
But the parameters were obtained from a sample of data, so we should take account of the precision of the parameters.

Quantifying Parameter Uncertainty
- Classical statistics: asymptotic distribution of ML estimates
- Bootstrapping: resample with replacement from the original sample, re-fit the model to each sample; gives a (joint) distribution of parameters
- Bayesian statistics: use Bayes' Theorem to determine the posterior distribution of the parameters, given the data and prior beliefs

Parameter Uncertainty: Likelihood-Based
The covariance matrix of the parameter estimates is the inverse of the Fisher information matrix:

Cov(θ) ≈ I(θ)^(-1),  with  I(θ)_rs = -n E[ ∂²ℓ(θ) / (∂θ_r ∂θ_s) ]

The standard error of each parameter is the square root of the corresponding diagonal element.

      Estimate   Std Err
m      -0.253     0.021
s       0.066     0.015

But how good is an assumption of multivariate normality?
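For the log-normal fit, the Fisher information gives the familiar closed forms se(m) = s/√n and se(s) = s/√(2n). A quick check (a reconstruction, not from the slides) reproduces the 0.021 and 0.015 above:

```python
import math

s, n = 0.066, 10                      # MLE of the log-scale standard deviation, sample size
se_m = s / math.sqrt(n)               # standard error of m  -> about 0.021
se_s = s / math.sqrt(2 * n)           # standard error of s  -> about 0.015
print(round(se_m, 3), round(se_s, 3))
```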

Forecasting with Parameter Uncertainty: A Bayesian Approach
- Select a distribution for the data
- Select a prior distribution for the parameters
- Form the posterior distribution by revising the prior in light of the data sample
- Simulate parameters from the posterior distribution using MCMC methods; this gives a (joint) distribution of parameters
- Note: if non-informative uniform priors are used, the posterior log-likelihood is essentially the log-likelihood of the data (plus a constant)
- Simulate the forecast conditional on the (simulated) parameters

Gibbs Sampling
Posterior likelihood: f(θ | X) ∝ L(X | θ) π(θ)
Gibbs sampling draws each parameter in turn from its full conditional distribution:
θ_1^(1) ~ f(θ_1 | θ_2^(0), ..., θ_k^(0))
θ_2^(1) ~ f(θ_2 | θ_1^(1), θ_3^(0), ..., θ_k^(0))
...
θ_j^(1) ~ f(θ_j | θ_1^(1), ..., θ_{j-1}^(1), θ_{j+1}^(0), ..., θ_k^(0))
...
θ_k^(1) ~ f(θ_k | θ_1^(1), ..., θ_{k-1}^(1))
A generic sampling algorithm (e.g. ARS/ARMS) is used where the conditional distributions cannot be recognised.

Gibbs Sampling
Starting from initial parameters θ^(0), each iteration updates θ_1, ..., θ_k in turn, producing θ^(1), θ^(2), ..., θ^(10,000).
Where the joint log-density cannot be factorised, it is treated sequentially as a function of one parameter, conditional on the most recent values of all other parameters.
Where the log-density is not from a standard distribution, a generic sampling routine is used, such as Adaptive Rejection Metropolis Sampling (ARMS).
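As an illustration of the idea (a minimal sketch with standard non-informative priors, not the EMB implementation, which uses ARMS where the conditionals are non-standard), a two-parameter Gibbs sampler for the log-normal loss-ratio model alternates between the full conditionals of m and s², both of which are recognisable distributions here:

```python
import numpy as np

rng = np.random.default_rng(1)
y = np.log([0.76, 0.79, 0.78, 0.85, 0.83, 0.85, 0.71, 0.76, 0.76, 0.69])  # log loss ratios
n = len(y)

n_iter, burn_in = 10_000, 1_000
m, s2 = y.mean(), y.var()                 # starting values
draws = np.empty((n_iter, 2))

for i in range(n_iter):
    # m | s2, y  ~  Normal(ybar, s2/n)          (flat prior on m)
    m = rng.normal(y.mean(), np.sqrt(s2 / n))
    # s2 | m, y  ~  Inverse-Gamma(n/2, sum((y-m)^2)/2)   (prior proportional to 1/s2)
    s2 = 1.0 / rng.gamma(shape=n / 2.0, scale=2.0 / np.sum((y - m) ** 2))
    draws[i] = m, np.sqrt(s2)

post = draws[burn_in:]
print("posterior means:", post.mean(axis=0))
# m centres near the ML estimate -0.253; s sits a little above 0.066,
# reflecting the extra spread from parameter uncertainty in a small sample
```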

[Charts: MCMC posterior distributions of the parameters, with the ML estimates m = -0.253 and s = 0.066 marked for comparison]

Incorporating Parameter Uncertainty
Excluding parameter uncertainty: historical data / judgement -> fixed parameters (m, s) -> forecast density of aggregate losses.
Including parameter uncertainty: historical data / judgement -> a distribution of parameters -> forecast density of aggregate losses.
[Charts: forecast densities of aggregate losses with and without parameter uncertainty]

Stop Loss 10% xs 85%
Price excluding parameter uncertainty: $233
Price including parameter uncertainty: $497

Inflation-adjusted cumulative claim amounts (development years 1-10):

Year     1       2       3       4       5       6       7        8        9       10
1998   7,729  28,846  45,003  62,748  75,460  89,604  98,994  106,163  110,978  114,505
1999   9,111  33,697  51,559  66,717  82,816  90,475  95,801  103,245  105,940
2000   6,421  27,266  43,315  66,449  79,884  88,105  91,245   94,959
2001   6,055  26,210  43,261  58,119  68,994  75,261  81,590
2002   4,475  17,318  33,698  45,968  60,533  68,071
2003   4,678  16,836  28,422  42,352  56,817
2004   3,208  10,285  20,994  30,525
2005   2,370  10,043  18,953
2006   2,593  10,484
2007   2,118

Year   Rate and Inflation Adjusted Premium   Expected Ultimate Claims   Expected Loss Ratio
1998              166,673                           126,878                    76%
1999              152,536                           120,857                    79%
2000              144,144                           112,653                    78%
2001              120,365                           102,198                    85%
2002              111,673                            92,231                    83%
2003              102,386                            86,747                    85%
2004               79,816                            56,547                    71%
2005               64,631                            49,172                    76%
2006               59,654                            45,497                    76%
2007               51,742                            35,666                    69%
2008               50,000                                 ?                      ?

Data uncertainty: the expected loss ratios are derived from a forecast of the ultimate claims, which is itself uncertain.

Forecast the cumulative claims by origin year using a Bayesian version of Mack's model, with curve fitting for tail estimation (England PD & Verrall RJ (2006), Annals of Actuarial Science, Vol. 1 Part II).

Incorporating Parameter and Data Uncertainty: A Pragmatic Approach
- Instead of considering the data as fixed and known, use simulated data instead
- Then obtain a distribution of parameters, conditional on the simulated data
- Then forecast, conditional on the parameters
[Diagram: simulated data -> distribution of parameters -> forecast density of aggregate losses]
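A hedged sketch of that nested loop. The stochastic reserving step is represented here by a hypothetical simulate_loss_ratios() stub that merely perturbs the observed loss ratios; in the presentation that step comes from the Bayesian version of Mack's model, and the parameter draw is likewise simplified (normal approximation for m, s held fixed for brevity), so the output will not match the slide's figures.

```python
import numpy as np

rng = np.random.default_rng(2)
observed_lr = np.array([0.76, 0.79, 0.78, 0.85, 0.83, 0.85, 0.71, 0.76, 0.76, 0.69])
premium_2008 = 50_000

def simulate_loss_ratios():
    """Hypothetical stand-in for stochastic reserving: one simulated history of loss ratios."""
    return observed_lr * rng.lognormal(mean=0.0, sigma=0.05, size=observed_lr.size)

forecasts = []
for _ in range(10_000):
    y = np.log(simulate_loss_ratios())                             # 1. simulate the data
    m = rng.normal(y.mean(), y.std(ddof=0) / np.sqrt(len(y)))      # 2. draw parameters given that data
    s = y.std(ddof=0)                                              #    (s kept fixed here for brevity)
    forecasts.append(rng.lognormal(m, s) * premium_2008)           # 3. forecast given the parameters

recovery = np.clip(np.array(forecasts) - 0.85 * premium_2008, 0.0, 0.10 * premium_2008)
print("stop-loss price with data and parameter uncertainty ~",
      round(recovery.mean() + 0.25 * recovery.std()))
```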

[Charts: parameter and forecast distributions without and with data uncertainty]

Stop Loss 10% xs 85%
Parameters and data fixed and known: $233
With parameter uncertainty: $497
With parameter and data uncertainty: $1,015

Conclusion
- Parameter uncertainty is often ignored
- Statistical methods exist to quantify this uncertainty
- The impact on model results can be significant; it has the largest effect with small data sets, especially in the tails of distributions
- Ignoring parameter uncertainty could therefore underestimate capital requirements and underestimate (re-)insurance prices
- Ignoring data uncertainty will only make matters worse

Parameter Uncertainty: FSA Comment
"We do not think it appropriate to ignore this risk altogether. In particular, informal discussions with market participants suggest that applying parameter uncertainty can have a significant impact on the underlying ICA."
FSA Insurance Sector Briefing: ICAS - one year on (Nov. 2005)

[Portrait: Reverend Thomas Bayes (1702-1761)]

"Modern computer simulation techniques open up a wide field of practical applications for risk theory concepts, without the restrictive assumptions, and sophisticated mathematics, of many traditional aspects of risk theory." - Daykin, Pentikainen and Pesonen (1996)

Model Uncertainty
Try different distributions. For example, using a Gamma(a, b) distribution:

     ML Estimate   Std Err
a       231.2       103.6
b       0.0034      0.0015
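A quick check of the Gamma alternative (a sketch using scipy, not the presenters' fitting code; fixing the location parameter at zero so the fit returns the shape a and scale b is an assumption):

```python
import numpy as np
from scipy import stats

loss_ratios = np.array([0.76, 0.79, 0.78, 0.85, 0.83, 0.85, 0.71, 0.76, 0.76, 0.69])

# Maximum likelihood fit of a Gamma(a, b) with the location fixed at zero
a, loc, b = stats.gamma.fit(loss_ratios, floc=0)
print(f"a = {a:.1f}, b = {b:.4f}")   # roughly a ~ 230, b ~ 0.0034
```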

[Charts: Gamma-model forecast densities without and with data uncertainty; ML estimates a = 231.2, b = 0.0034]

Stop Loss 10% xs 85% (Gamma model)
Parameters and data fixed and known: $218
With parameter uncertainty: $359
With parameter and data uncertainty: $934