Loss Simulation Model Testing and Enhancement


Loss Simulation Model Testing and Enhancement
Casualty Loss Reserve Seminar
By Kailan Shang, Sept. 2011

Agenda
- Research Overview
- Model Testing
- Real Data
- Model Enhancement
- Further Development

I. Research Overview

Background: Why use the LSM
- Reserving is a challenging task that requires considerable judgment in setting assumptions.
- The Loss Simulation Model (LSM) is a tool created by the CAS Loss Simulation Model Working Party (LSMWP) to generate claims that can be used to test loss reserving methods and models.
- It helps us understand the impact of assumptions on reserving from a different perspective: distributions based on simulations that resemble real experience.
- In addition, stochastic reserving is a growing trend.

Background: How to use the LSM
Workflow: fit real claim and reserve data to statistical models (frequency, severity, trend, state) -> run simulations in the Loss Simulation Model to produce stochastic claim and reserve data -> apply different reserve methods and compare the resulting reserve distributions against the simulated claim data -> test against real experience / model assumptions -> choose the best reserve method.
We do not expect an accurate point estimate of the claim amount. We are more concerned about the adequacy of the reserve: at what probability is the reserve expected to fall below the final payment?

Background: How to use the LSM
[Figure: probability density functions of the aggregate claim distribution and of the reserve distributions under Method A and Method B]
Cumulative probability of staying below each amount:
Amount   Claim    Method A   Method B
10       83.5%    73.7%      81.2%
15       95.7%    90.3%      96.7%
20       99.0%    96.6%      99.5%
25       99.8%    98.9%      99.9%
30       99.9%    99.6%      100.0%
Is Method B good enough? The 99.9th percentile of Method B is below the 99.9th percentile of the claim distribution, so the reserve can still fall short in the tail. Without stochastic analysis, Method B might be chosen. The LSM can help with this.
This is one out of hundreds of examples.
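As a toy illustration of this comparison (hypothetical stand-in numbers, not actual LSM output; agg.claim and reserve.B are made-up simulation vectors), one could tabulate the two cumulative distributions at a few amounts and compare tail percentiles in R:
set.seed(16807)
agg.claim <- rlnorm(10000, meanlog = 1.6, sdlog = 0.55)   # stand-in simulated aggregate claims
reserve.B <- rlnorm(10000, meanlog = 1.7, sdlog = 0.40)   # stand-in reserve distribution, Method B
amounts <- c(10, 15, 20, 25, 30)
# cumulative probability of each distribution staying below the given amounts
rbind(claim = ecdf(agg.claim)(amounts), method.B = ecdf(reserve.B)(amounts))
# compare extreme percentiles, as in the Method B check above
quantile(agg.claim, 0.999)
quantile(reserve.B, 0.999)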

Overview
- Test some items suggested but not fully addressed in the CAS LSMWP summary report, "Modeling Loss Emergence and Settlement Processes."
- Fit real claim data to models.
- Build a two-state regime-switching feature into the LSM to add an extra layer of flexibility for describing claim data.
- Software: LSM and R. The R source code for model testing and model fitting is provided.

Model Testing
Within the workflow above, this step tests the LSM output against the model assumptions:
- Negative binomial frequency distribution
- Correlation
- Severity trend
- Case reserve adequacy distribution

Real Data Model Fitting
Fit real claim data to statistical models:
- Frequency
- Severity
- Trend
- Correlation

Model Enhancement
Two-state regime-switching distribution:
- Switch between states with specified probabilities
- Each state represents a distinct distribution

II. Model Testing

DAY ONE 9 AM
Boss: Tom, our company plans to use the loss simulation model to help our reserving work. Let's do some tests first to get a better understanding of the model.
Tom: Boss, where shall we start?
Boss: Start with the frequency model.

Negative Binomial Frequency Testing
Frequency simulation:
- One line with annual frequency Negative Binomial (size = 100, prob. = 0.4)
- Monthly exposure: 1
- Frequency trend: 1
- Seasonality: 1
- Accident year: 2000
- Random seed: 16807
- No. of simulations: 1,000
[Figure: histogram of the observed frequencies and QQ plot against the Negative Binomial distribution]
R code extract:
# draw histogram of the simulated annual frequencies (dataf1)
hist(dataf1, main = "Histogram of observed data")
# QQ plot against a reference Negative Binomial sample
freq.ex <- rnbinom(n = 1000, size = 100, prob = 0.4)
qqplot(dataf1, freq.ex, main = "QQ-plot distr. Negative Binomial")
abline(0, 1)   # a 45-degree reference line is plotted

Negative Binomial Frequency Testing
Goodness-of-fit test (Pearson's chi-squared): chi-squared = 197.4, p-value = 0.64
Maximum likelihood (ML) estimation:
             size    mu
Estimation   117.2   144.2
S.D.         9.5     0.57
Comparison with model assumptions:
            Model assumption   ML estimation
Size        100                117
Prob.       0.4                0.448
Mean (mu)   150                144.2
Variance    375                321.5
R code extract:
# goodness-of-fit test
library(vcd)   # load package vcd
gf <- goodfit(dataf1, type = "nbinomial", par = list(size = 100, prob = 0.4))
# maximum likelihood estimation
gf <- goodfit(dataf1, type = "nbinomial", method = "ML")
library(MASS)  # fitdistr()
fitdistr(dataf1, "Negative Binomial")
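For reference, the "model assumption" mean and variance in the table follow directly from the Negative Binomial moment formulas; a minimal stand-alone R sketch (not from the slides) to reproduce them and sanity-check against a simulated sample:
size <- 100
prob <- 0.4
mean.nb <- size * (1 - prob) / prob     # theoretical mean = 150
var.nb  <- size * (1 - prob) / prob^2   # theoretical variance = 375
set.seed(16807)
x <- rnbinom(10000, size = size, prob = prob)
c(theo.mean = mean.nb, samp.mean = mean(x), theo.var = var.nb, samp.var = var(x))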

DAY ONE 5 PM
Boss: Good job, Tom! Let's get the correlation test done tomorrow.

Correlation
Correlation among frequencies of different lines:
- Gaussian copula
- Clayton copula
- Frank copula
- Gumbel copula
- t copula
Correlation between claim size and report lag:
- Gaussian copula
- Clayton copula
- Frank copula
- Gumbel copula
- t copula
Use the R package copula.

Frequencies: Frank Copula
Frank copula (theta > 0):
C_theta(u) = -(1/theta) * ln( 1 + (e^(-theta*u1) - 1)(e^(-theta*u2) - 1)...(e^(-theta*un) - 1) / (e^(-theta) - 1)^(n-1) )
- u_i: marginal cumulative distribution function (CDF)
- C(u): joint CDF
Frequencies simulation:
- Two lines with annual frequency Poisson (lambda = 96)
- Monthly exposure: 1
- Frequency trend: 1
- Seasonality: 1
- Accident year: 2000
- Random seed: 16807
- Frequency correlation: Frank copula with theta = 8, n = 2
- No. of simulations: 1,000
Test method:
- Scatter plot
- Goodness-of-fit test:
  1. Parameter estimation based on maximum likelihood and on the inversion of Kendall's tau
  2. Cramer-von Mises (CvM) statistic S_n = sum_{i=1..n} [ C_n(U_i) - C_{theta_n}(U_i) ]^2, where C_n is the empirical copula and C_{theta_n} the fitted copula
  3. p-value by parametric bootstrapping

Frequencies: Frank Copula
[Figure: scatter plot of simulated frequencies, Line 1 vs. Line 2, Frank copula (theta = 8)]
Goodness-of-fit test:
- Maximum likelihood method: parameter estimate 7.51, std. error 0.28, CvM statistic 0.016 with p-value 0.31
- Inversion of Kendall's tau method: parameter estimate 7.54, std. error 0.31, CvM statistic 0.017 with p-value 0.20
R code extract:
# construct a Gumbel copula object (shown as an example; a Frank copula is built with frankCopula())
gumbel.cop <- gumbelCopula(3, dim = 2)
# parameter estimation
fit.gumbel <- fitCopula(gumbel.cop, x, method = "ml")
fit.gumbel <- fitCopula(gumbel.cop, x, method = "itau")
# copula goodness-of-fit test
gofCopula(gumbel.cop, x, N = 100, method = "mpl")
gofCopula(gumbel.cop, x, N = 100, method = "itau")
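For a self-contained version of this test (an illustrative sketch, not the LSM workflow: the data x are replaced by simulated pairs, and the calls follow a recent copula package API, where rCopula, pobs, and estim.method differ slightly from the 2011-era names shown above):
library(copula)
set.seed(16807)
# simulate 1,000 pairs of uniform margins from a bivariate Frank copula, theta = 8
frank.cop <- frankCopula(param = 8, dim = 2)
u <- rCopula(1000, frank.cop)
plot(u, xlab = "Line1", ylab = "Line2", main = "Frank copula (theta = 8)")
# parameter estimation on pseudo-observations
fit.mpl  <- fitCopula(frankCopula(dim = 2), pobs(u), method = "mpl")
fit.itau <- fitCopula(frankCopula(dim = 2), pobs(u), method = "itau")
coef(fit.mpl); coef(fit.itau)
# Cramer-von Mises goodness-of-fit test with bootstrap p-value
gofCopula(frankCopula(dim = 2), u, N = 100, estim.method = "mpl")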

Claim Size and Report Lag: Normal Copula
Normal copula (a.k.a. Gaussian copula):
C_Sigma(u) = Phi_Sigma( Phi^(-1)(u1), ..., Phi^(-1)(un) )
- Sigma: correlation matrix
- Phi: normal cumulative distribution function
Claim simulation:
- One line with annual frequency Poisson (lambda = 120)
- Monthly exposure: 1
- Frequency trend: 1.05
- Seasonality: 1
- Accident year: 2000
- Random seed: 16807
- Payment lag: Exponential with rate = 0.00274, which implies a mean of 365 days
- Size of entire loss: Lognormal with mu = 11.17 and sigma = 0.83
- Correlation between payment lag and size of loss: normal copula with correlation 0.85, dimension 2
- No. of simulations: 10

Claim Size and Report Lag: Normal Copula
[Figure: scatter plot of simulated claim size vs. report lag, normal copula (correlation 0.85)]
Goodness-of-fit test:
- Maximum likelihood method: parameter estimate 0.83, std. error 0.01, CvM statistic 0.062 with p-value 0.05
- Inversion of Kendall's tau method: parameter estimate 0.85, std. error 0.01, CvM statistic 0.029 with p-value 0.015
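To see how such dependent claim attributes can be generated outside the LSM, a rough R sketch (an assumption of this write-up; variable names are placeholders) draws dependent uniforms from a Gaussian copula and maps them through the stated marginals:
library(copula)
set.seed(16807)
norm.cop <- normalCopula(0.85, dim = 2)
u <- rCopula(1000, norm.cop)                           # dependent uniform pairs
lag  <- qexp(u[, 1], rate = 0.00274)                   # payment lag, mean about 365 days
size <- qlnorm(u[, 2], meanlog = 11.17, sdlog = 0.83)  # size of entire loss
plot(lag, size, log = "y", xlab = "payment lag (days)", ylab = "loss size")
cor(lag, size, method = "kendall")                     # rank correlation implied by the copula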

DAY THREE 9 AM
Boss: We often see trends in our claim data. How are they handled in the simulation model?

Severity Trend
The LSM has two ways to model severity trend:
- Trend factor (cum)
- alpha (persistency of the force of the trend)
trend = cum_acc_date * (cum_pmt_date / cum_acc_date)^alpha = (cum_acc_date)^(1 - alpha) * (cum_pmt_date)^alpha
Trend factor test parameters:
- One line with annual frequency Poisson (lambda = 96)
- Monthly exposure: 1
- Frequency trend: 1
- Seasonality: 1
- Accident year: 2000 to 2005
- Random seed: 16807
- Size of entire loss: Lognormal with mu = 11.17 and sigma = 0.83
- Severity trend: 1.5
- No. of simulations: 300

Severity Trend: Trend Factor Test
- Decomposition of the mean loss size time series by Loess (locally weighted regression) into trend, seasonality, and remainder.
[Figure: monthly mean loss size, 2000-2006, decomposed into data, seasonal, trend, and remainder components]
- Time series analysis (linear regression):
log(Mean Loss Size) = Intercept + trend * (time - 2000) + error term
Coefficients:
              Estimate    Std. Error   t value   Pr(>|t|)
(Intercept)   11.034162   0.007526     1466.1    <2e-16
trend         0.405552    0.002196     184.7     <2e-16
Residual standard error: 0.03226 on 70 degrees of freedom
Multiple R-squared: 0.998, Adjusted R-squared: 0.9979
F-statistic: 3.412e+04 on 1 and 70 DF, p-value: < 2.2e-16
exp(0.405552) = 1.50013 vs. the model input of 1.5
R code extract:
# set up time series
ts1 <- ts(data, start = 2000, frequency = 12)
plot(ts1)
# decomposition
plot(stl(ts1, s.window = "periodic"))
# linear trend fitting
trend <- time(ts1) - 2000
reg <- lm(log(ts1) ~ trend, na.action = NULL)

Severity Trend: Trend Persistency alpha Test
Parameters:
- One line with annual frequency Poisson (lambda = 96)
- Monthly exposure: 1
- Frequency trend: 1
- Seasonality: 1
- Accident year: 2000 to 2001
- Random seed: 16807
- Size of entire loss: Lognormal with mu = 11.17 and sigma = 0.83
- Severity trend: 1.5
- Alpha: 0.4
- No. of simulations: 1,000
But how do we test it? Choose the loss payments with report date during the 1st month and payment date during the 7th month.
The severity trend is (1.5^(1/12))^(1-0.4) * (1.5^(7/12))^0.4 = 1.122.
The expected loss size is 1.122 * e^(11.17 + 0.83^2/2) = 112,175.
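A quick R check of that arithmetic (a stand-alone sketch, not from the slides):
alpha   <- 0.4
cum.acc <- 1.5^(1 / 12)   # cumulative trend to the report month (month 1)
cum.pmt <- 1.5^(7 / 12)   # cumulative trend to the payment month (month 7)
trend   <- cum.acc^(1 - alpha) * cum.pmt^alpha      # about 1.122
expected.size <- trend * exp(11.17 + 0.83^2 / 2)    # about 112,000 (untrended lognormal mean times trend)
c(trend = trend, expected.size = expected.size)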

Severity Trend: Trend Persistency alpha Test
[Figure: histogram of the selected severities with fitted lognormal pdf, and QQ plot against the lognormal distribution]
- Maximum likelihood estimation (mean of severity = 113,346):
                      meanlog   sdlog
Estimation            11.32     0.80
Standard deviation    0.052     0.037
- Normality test of log(severity):
Kolmogorov-Smirnov test: p-value = 0.82
Anderson-Darling normality test: p-value = 0.34
R code extract:
# Kolmogorov-Smirnov test
ks.test(a, "plnorm", meanlog = 11.32, sdlog = 0.8)
# Anderson-Darling test
library(nortest)   # package loading
ad.test(datas1.norm)

DAY FOUR 9 AM
I heard you guys plan to use the loss simulation model. Is it capable of modeling case reserve adequacy?

Case Reserve Adequacy
In the LSM, the case reserve adequacy (CRA) distribution attempts to model the reserving process by generating a case reserve adequacy ratio at each valuation date:
- Case reserve = generated final claim amount * case reserve adequacy ratio
Case reserve simulation:
- One line with annual frequency Poisson (lambda = 96)
- Monthly exposure: 1
- Frequency trend: 1
- Seasonality: 1
- Accident year: 2000 to 2001
- Random seed: 16807
- Size of entire loss: Lognormal with mu = 11.17 and sigma = 0.83
- Severity trend: 1
- P(0) = 0.4
- Est P(0) = 0.4
- No. of simulations: 8
Test: the case reserve adequacy ratio at the 40% time point (0.6 * report date + 0.4 * final payment date) should have mean e^(0.25 + 0.05^2/2) = 1.2856.

Case Reserve Adequacy Test
[Figure: QQ plot of the case reserve adequacy ratio against the lognormal distribution]
- Maximum likelihood estimation:
                      meanlog   sdlog
Estimation            0.08      0.32
Standard deviation    0.014     0.010
- Normality test of log(CRA ratio):
Kolmogorov-Smirnov test: p-value = 0.00
Anderson-Darling normality test: p-value = 0.00
Where did it go wrong?
- The case reserve is generated on the simulated valuation dates; linear interpolation is then used to get the case reserve ratio at the 40% time point.
- On the report date, a case reserve of 2,000 is allocated to each claim.
- If the second valuation date is later than the 40% time point, the linear interpolation method is not appropriate.
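To make the interpolation issue concrete, here is a small illustrative R sketch (purely hypothetical numbers, not LSM output) that reads the 40%-time-point case reserve off a straight line between the report-date reserve of 2,000 and the next simulated valuation date; when that second valuation date falls after the 40% point, the interpolated adequacy ratio is dragged toward the flat 2,000 reserve:
final.pmt <- 100000                      # hypothetical final claim amount
t40       <- 0.4 * 500                   # 40% time point on a 500-day settlement (day 200)
val.dates <- c(0, 300)                   # report date and the next simulated valuation date
case.res  <- c(2000, 1.29 * final.pmt)   # 2,000 at report; adequacy ratio ~1.29 at the 2nd valuation
res.40 <- approx(val.dates, case.res, xout = t40)$y   # linear interpolation at day 200
res.40 / final.pmt                       # implied adequacy ratio, far below the assumed mean of ~1.29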

III. Real Data

DAY FIVE 5 PM
Boss: Wait a minute, Tom! I want you to think over the weekend about how to use real claim data for model calibration!

Real Data
Marine claim data for distribution fitting, trend analysis, and correlation analysis:
- Two product lines: Property and Liability
- Data period: 2006-2010
- Fields: accident date, payment date, and final payment amount
Fit the frequency:
- Draw the time series and its decomposition.
[Figure: historical monthly frequency, 2006-2010, decomposed into data, seasonal, trend, and remainder components]

Real Data
Fit the frequency (continued):
- Linear regression for trend analysis:
log(Monthly Frequency) = Intercept + trend * (time - 2006) + error term
Coefficients:
              Estimate   Std. Error   t value   Pr(>|t|)
(Intercept)   1.93060    0.15164      12.732    <2e-16
trend         -0.14570   0.05919      -2.462    0.0172
Residual standard error: 0.5649 on 52 degrees of freedom
Multiple R-squared: 0.1044, Adjusted R-squared: 0.08715
F-statistic: 6.06 on 1 and 52 DF, p-value: 0.01718
[Figure: log monthly frequency with fitted linear trend, 2006-2010]

Real Data
Fit the frequency (continued):
- Detrend the frequency and fit it to the lognormal distribution:
                      meanlog     sdlog
Estimation            9.5539259   3.1311762
Standard deviation    0.4260991   0.3012976
- Normality test of log(detrended freq.):
Kolmogorov-Smirnov test: p-value = 0.84
[Figure: QQ plot of the detrended frequency against the fitted distribution]
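A minimal sketch of this detrend-then-fit step (with synthetic stand-in counts, since the marine data are not reproduced here; freq.ts and its parameters are placeholders):
library(MASS)   # fitdistr()
set.seed(16807)
# synthetic monthly frequency series with a mild downward trend, 2006 onward (54 months)
freq.ts <- ts(pmax(rpois(54, lambda = 8 * exp(-0.15 * (0:53) / 12)), 1),
              start = 2006, frequency = 12)
trend <- time(freq.ts) - 2006
reg   <- lm(log(freq.ts) ~ trend)
# remove the fitted trend, keeping the level at time 2006
detrended <- as.numeric(freq.ts) / exp(coef(reg)["trend"] * trend)
fit <- fitdistr(detrended, "lognormal")
fit$estimate   # meanlog, sdlog
# normality test of the log of the detrended frequency
ks.test(log(detrended), "pnorm", mean = fit$estimate["meanlog"], sd = fit$estimate["sdlog"])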

Real Data
Fit the severity. Correlation calibration:
[Figure: simulated Frank copula (theta = 1.3) vs. the empirical dependence between Line 1 and Line 2]
- Maximum likelihood method: parameter estimate 1.51, CvM statistic 0.027 with p-value 0.35
- Inversion of Kendall's tau method: parameter estimate 1.34, CvM statistic 0.028 with p-value 0.40
What is missing? Historical reserve data, which are essential for case reserve adequacy modeling.

IV. Model Enhancement

Two-state regime-switching model
Sometimes the frequency and severity distributions are not stable over time:
- Structural change
- Cyclical pattern
- Idiosyncratic character
The model:
- Two distinct distributions represent different states.
- Transition rules govern the move from one state to the other:
  P11: state 1 persistency, the probability that the state will be 1 next month given that it is 1 this month.
  P12: the probability that the state will be 2 next month given that it is 1 this month.
  P21: the probability that the state will be 1 next month given that it is 2 this month.
  P22: state 2 persistency, the probability that the state will be 2 next month given that it is 2 this month.
  P11 + P12 = 1, P21 + P22 = 1
- Pi1: steady-state probability of state 1; Pi2: steady-state probability of state 2. They satisfy
  (Pi1, Pi2) = (Pi1, Pi2) * [ P11 P12 ; P21 P22 ] and Pi1 + Pi2 = 1.

Two-state regime-switching model
The simulation steps:
1. Generate a uniform random number randf0 on the range [0, 1].
2. If randf0 < Pi1, the state of the first month is 1; otherwise it is 2.
3. Generate a uniform random number randfi on the range [0, 1].
4. Given the previous month's state i, if randfi < Pi1 then the state is 1; otherwise it is 2.
5. Repeat steps 3 and 4 until the end of the simulation is reached.
Test parameters:
- State 1: Poisson distribution (lambda = 120)
- State 2: Negative Binomial distribution (size = 36, prob = 0.5)
- Trend, monthly exposure, and seasonality are all 1
- State 1 persistency: 0.5
- State 2 persistency: 0.7
- Seed: 16807
Steady-state probabilities:
Pi1 = (1 - P22) / (2 - P11 - P22) = (1 - 0.7) / (2 - 0.5 - 0.7) = 0.375
Pi2 = (1 - P11) / (2 - P11 - P22) = (1 - 0.5) / (2 - 0.5 - 0.7) = 0.625
Example path:
Random Number (RN)     State   Criterion
0.634633548790589      2       RN > 0.375
0.801362191326916      1       RN > 0.7
0.529508789768443      2       RN > 0.5
0.0441845036111772     2       RN < 0.7
0.994539848994464      1       RN > 0.7
0.21886122901924       1       RN < 0.5
0.0928565948270261     1       RN < 0.5
0.797880138037726      2       RN > 0.5
0.129500501556322      2       RN < 0.7
0.24027365935035       2       RN < 0.7
0.797712686471641      1       RN > 0.7
0.0569291599094868     1       RN < 0.5
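A compact R sketch of steps 1-5 (not from the slides; the monthly lambda = 10 and size = 3 below assume the annual parameters are spread evenly over 12 months, consistent with the zero-claim probabilities quoted on the next slide):
p11 <- 0.5; p22 <- 0.7
p21 <- 1 - p22
pi1 <- (1 - p22) / (2 - p11 - p22)          # steady-state probability of state 1 = 0.375
set.seed(16807)
n.months <- 12000
state <- integer(n.months)
state[1] <- if (runif(1) < pi1) 1 else 2    # step 2
for (i in 2:n.months) {                     # steps 3-5
  p.to1 <- if (state[i - 1] == 1) p11 else p21
  state[i] <- if (runif(1) < p.to1) 1 else 2
}
table(state) / n.months                     # long-run proportions, close to 0.375 / 0.625
# monthly frequency drawn from the distribution of the prevailing state
freq <- ifelse(state == 1, rpois(n.months, 10), rnbinom(n.months, size = 3, prob = 0.5))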

Two-state regime-switching model
The test: transition matrix (frequency)
- State 1: Poisson (lambda = 120); state 1 persistency: 0.2
- State 2: Negative Binomial (size = 36, prob = 0.5); state 2 persistency: 0.9
Line 1 frequency: [ P11 P12 ; P21 P22 ] = [ 0.15 0.85 ; 0.1 0.9 ], (Pi1, Pi2) = (10.53%, 89.47%)
Line 2 frequency: [ P11 P12 ; P21 P22 ] = [ 0.2 0.8 ; 0.1 0.9 ], (Pi1, Pi2) = (11.11%, 88.89%)
                                        Line 1                        Line 2
Non-zero cases, state 1                 391                           410
Non-zero cases, state 2                 2,797                         2,733
Probability of zero cases, state 1      0.005% (e^-10)                0.005% (e^-10)
Probability of zero cases, state 2      0.125 (prob^3)                0.135 (e^-2)
Estimated all cases, state 1            391                           410
Estimated all cases, state 2            3,188 (2,797/(1-0.125))       3,161 (2,733/(1-0.135))
(Estimated all cases = non-zero cases / (1 - probability of zero cases))
Total cases: no. of simulations * 12 months = 3,600
Steady-state probability, state 1       391/3,600 = 10.86%            410/3,600 = 11.4%
Steady-state probability, state 2       1 - 10.86% = 89.14%           1 - 11.4% = 88.6%
(compare with Pi1 and Pi2 above)

Two-state regime-switching model
The test: correlation
[Figure: scatter plots of simulated Line 1 vs. Line 2 frequencies under a normal copula (0.95), split into four state combinations]
- Set 1: state 1 for Line 1 and state 1 for Line 2
- Set 2: state 1 for Line 1 and state 2 for Line 2
- Set 3: state 2 for Line 1 and state 1 for Line 2
- Set 4: state 2 for Line 1 and state 2 for Line 2
A goodness-of-fit test is also conducted.

Interface: Input
[Screenshot of the LSM input interface]

Interface: Output
- An additional column in the claim and transaction output files records the state.
- The state and random number are shown while simulating.

THREE MONTHS LATER
Well done! It improved our reserve adequacy a lot and reduced our earnings volatility. We created a new manager position for you. Congratulations!

V. Further Development

Further Development
The case reserve adequacy test shows that the assumption is not consistent with the simulated data. This may be caused by the linear interpolation method used to derive the case reserve at the 40% time point. It is suggested to revise the way valuation dates are determined in the LSM: in addition to the simulated valuation dates based on the waiting-period distribution assumption, some deterministic time points can be added as valuation dates. Since the LSM allows case reserve adequacy distributions to be input at the 0%, 40%, 70%, and 90% time points, those same time points may be added as deterministic valuation dates.

Thank you!