Loss Cost Modeling vs. Frequency and Severity Modeling

Loss Cost Modeling vs. Frequency and Severity Modeling
2013 CAS Ratemaking and Product Management Seminar
March 13, 2013, Huntington Beach, CA
Jun Yan, Deloitte Consulting LLP

Antitrust Notice The Casualty Actuarial Society is committed to adhering strictly to the letter and spirit of the antitrust laws. Seminars conducted under the auspices of the CAS are designed solely to provide a forum for the expression of various points of view on topics described in the programs or agendas for such meetings. Under no circumstances shall CAS seminars be used as a means for competing companies or firms to reach any understanding expressed or implied that restricts competition or in any way impairs the ability of members to exercise independent business judgment regarding matters affecting competition. It is the responsibility of all seminar participants to be aware of antitrust regulations, to prevent any written or verbal discussions that appear to violate these laws, and to adhere in every respect to the CAS antitrust compliance policy.

Description of Frequency-Severity Modeling
- Claim Frequency = Claim Count / Exposure
- Claim Severity = Loss / Claim Count
- It is a common actuarial assumption that:
  o Claim Frequency has an over-dispersed Poisson distribution
  o Claim Severity has a Gamma distribution
- Loss Cost = Claim Frequency x Claim Severity
- Can be much more complex
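For concreteness, the following is a minimal sketch of this basic frequency-severity setup in Python with statsmodels; the presentation does not show code, so the data file, column names (exposure, claim_count, loss, territory, agegrp) and variable list are placeholders, not the presenter's actual data.

```python
# Minimal frequency-severity sketch (hypothetical data and column names).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

policies = pd.read_csv("vehicle_records.csv")                 # placeholder input
policies["frequency"] = policies["claim_count"] / policies["exposure"]

claims = policies[policies["claim_count"] > 0].copy()
claims["severity"] = claims["loss"] / claims["claim_count"]

# Claim frequency: Poisson with log link, weighted by exposure;
# estimating the scale by Pearson chi-square allows for over-dispersion.
freq_model = smf.glm(
    "frequency ~ C(territory) + C(agegrp)",
    data=policies,
    family=sm.families.Poisson(link=sm.families.links.Log()),
    var_weights=policies["exposure"],
).fit(scale="X2")

# Claim severity: Gamma with log link, weighted by claim count.
sev_model = smf.glm(
    "severity ~ C(territory) + C(agegrp)",
    data=claims,
    family=sm.families.Gamma(link=sm.families.links.Log()),
    var_weights=claims["claim_count"],
).fit()

# Loss cost = claim frequency x claim severity.
policies["expected_loss_cost"] = freq_model.predict(policies) * sev_model.predict(policies)
```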

Description of Frequency-Severity Modeling
A more sophisticated Frequency/Severity model design (a sketch follows below):
  o Frequency: Over-dispersed Poisson
  o Capped Severity: Gamma
  o Propensity of excess claim: Binomial
  o Excess Severity: Gamma
  o Expected Loss Cost = Frequency x Capped Severity + Propensity of excess claim x Excess Severity
  o Fit a model to the expected loss cost to produce loss cost indications by rating variable
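A sketch of how these four components could be fit and combined, using the same hypothetical data and column names as the previous sketch; the 50,000 cap is an arbitrary illustrative value, and the combination simply applies the slide's formula as written.

```python
# Sketch of the four-component frequency-severity design (hypothetical data).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

policies = pd.read_csv("vehicle_records.csv")                 # placeholder input
policies["frequency"] = policies["claim_count"] / policies["exposure"]
claims = policies[policies["claim_count"] > 0].copy()
claims["severity"] = claims["loss"] / claims["claim_count"]

CAP = 50_000                                                   # arbitrary cap
claims["capped_severity"] = np.minimum(claims["severity"], CAP)
claims["is_excess"] = (claims["severity"] > CAP).astype(int)
excess = claims[claims["is_excess"] == 1].copy()
excess["excess_severity"] = excess["severity"] - CAP

freq = smf.glm("frequency ~ C(territory) + C(agegrp)", data=policies,
               family=sm.families.Poisson(),
               var_weights=policies["exposure"]).fit(scale="X2")
capped_sev = smf.glm("capped_severity ~ C(territory) + C(agegrp)", data=claims,
                     family=sm.families.Gamma(link=sm.families.links.Log()),
                     var_weights=claims["claim_count"]).fit()
excess_prob = smf.glm("is_excess ~ C(territory) + C(agegrp)", data=claims,
                      family=sm.families.Binomial()).fit()
excess_sev = smf.glm("excess_severity ~ C(territory) + C(agegrp)", data=excess,
                     family=sm.families.Gamma(link=sm.families.links.Log())).fit()

# Expected Loss Cost = Frequency x Capped Severity
#                      + Propensity of excess claim x Excess Severity
policies["expected_loss_cost"] = (freq.predict(policies) * capped_sev.predict(policies)
                                  + excess_prob.predict(policies) * excess_sev.predict(policies))
```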

Description of Loss Cost Modeling: Tweedie Distribution
It is a common actuarial assumption that:
  o Claim count is Poisson distributed
  o Size-of-loss is Gamma distributed
Therefore the loss cost (LC) follows a Gamma-Poisson compound distribution, called the Tweedie distribution:
  LC = X1 + X2 + ... + XN, where Xi ~ Gamma for i ∈ {1, 2, ..., N} and N ~ Poisson
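The compound representation can be made concrete with a short simulation; the claim rate and Gamma parameters below are arbitrary illustrative values, not figures from the presentation.

```python
# Simulating LC = X1 + ... + XN with N ~ Poisson and Xi ~ Gamma per policy.
import numpy as np

rng = np.random.default_rng(2013)

def simulate_loss_cost(n_policies, claim_rate=0.05, shape=2.0, scale=1500.0):
    claim_counts = rng.poisson(claim_rate, size=n_policies)          # N per policy
    return np.array([rng.gamma(shape, scale, size=n).sum()           # sum of the Xi
                     for n in claim_counts])

loss_cost = simulate_loss_cost(100_000)
print("share of policies with zero loss cost:", (loss_cost == 0).mean())  # point mass at zero
print("average loss cost:", round(loss_cost.mean(), 2))
```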

Description of Loss Cost Modeling: Tweedie Distribution (Cont.)
The Tweedie distribution belongs to the exponential family:
  o Var(LC) = φ μ^p, where φ is a scale parameter and μ is the expected value of LC
  o p ∈ (1, 2); p is a free parameter that must be supplied by the modeler
  o As p → 1, LC approaches the over-dispersed Poisson
  o As p → 2, LC approaches the Gamma

Data Description
- Structure: on a vehicle-policy term level
- Total: 100,000 vehicle records
- Separated into training and testing subsets:
  o Training dataset: 70,000 vehicle records
  o Testing dataset: 30,000 vehicle records
- Coverage: Comprehensive

Numerical Example 1: GLM Setup in Total Dataset

Frequency Model
  Target = Frequency = Claim Count / Exposure
  Link = Log
  Distribution = Poisson
  Weight = Exposure
  Variables = Territory, Agegrp, Type, Vehicle_use, Vehage_group, Credit_Score, AFA

Severity Model
  Target = Severity = Loss / Claim Count
  Link = Log
  Distribution = Gamma
  Weight = Claim Count
  Variables = Territory, Agegrp, Type, Vehicle_use, Vehage_group, Credit_Score, AFA

Loss Cost Model
  Target = Loss Cost = Loss / Exposure
  Link = Log
  Distribution = Tweedie
  Weight = Exposure
  P = 1.30
  Variables = Territory, Agegrp, Type, Vehicle_use, Vehage_group, Credit_Score, AFA
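A sketch of the loss cost model column of this setup, again with hypothetical data and lower-case placeholder column names; in statsmodels the Tweedie power p is passed as var_power.

```python
# Loss cost GLM sketch: Tweedie error, log link, p = 1.30 (hypothetical data).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

policies = pd.read_csv("vehicle_records.csv")                 # placeholder input
policies["loss_cost"] = policies["loss"] / policies["exposure"]

lc_model = smf.glm(
    "loss_cost ~ C(territory) + C(agegrp) + C(type) + C(vehicle_use)"
    " + C(vehage_group) + C(credit_score) + C(afa)",
    data=policies,
    family=sm.families.Tweedie(link=sm.families.links.Log(), var_power=1.30),
    var_weights=policies["exposure"],
).fit()

# Rating factors on a loss cost basis are exp(estimate), as in the GLM output tables that follow.
print(np.exp(lc_model.params).round(2))
```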

Numerical Example 1: How to select p for the Tweedie model?
- Treat p as a parameter to be estimated
- Test a sequence of p values in the Tweedie model
- The log-likelihood shows a smooth inverted-U shape
- Select the p corresponding to the maximum log-likelihood value

p Optimization
  p      Log-likelihood
  1.20   -12192.25
  1.25   -12106.55
  1.30   -12103.24
  1.35   -12189.34
  1.40   -12375.87
  1.45   -12679.50
  1.50   -13125.05
  1.55   -13749.81
  1.60   -14611.13
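A sketch of this profiling loop under the same hypothetical setup. One stated assumption: statsmodels' extended quasi-likelihood option (eql=True) is used as a tractable stand-in for the full Tweedie likelihood, so the absolute log-likelihood values are approximate; the presentation does not specify which software or likelihood evaluation was used.

```python
# Profiling the Tweedie power p over a grid and keeping the maximizer.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

policies = pd.read_csv("vehicle_records.csv")                 # placeholder input
policies["loss_cost"] = policies["loss"] / policies["exposure"]
formula = ("loss_cost ~ C(territory) + C(agegrp) + C(type) + C(vehicle_use)"
           " + C(vehage_group) + C(credit_score) + C(afa)")

profile = {}
for p in np.arange(1.20, 1.65, 0.05):
    fam = sm.families.Tweedie(link=sm.families.links.Log(), var_power=p, eql=True)
    fit = smf.glm(formula, data=policies, family=fam,
                  var_weights=policies["exposure"]).fit()
    profile[round(float(p), 2)] = fit.llf

best_p = max(profile, key=profile.get)      # p with the largest log-likelihood
print(profile)
print("selected p:", best_p)
```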

Numerical Example 1: GLM Output (Models Built in Total Data)

                    Frequency Model          Severity Model            Frq x Sev      Loss Cost Model (p=1.3)
                    Estimate  Rating Factor  Estimate  Rating Factor   Rating Factor  Estimate  Rating Factor
  Intercept          -3.19    0.04            7.32     1510.35         62.37           4.10     60.43
  Territory T1        0.04    1.04           -0.17     0.84             0.87          -0.13     0.88
  Territory T2        0.01    1.01           -0.11     0.90             0.91          -0.09     0.91
  Territory T3        0.00    1.00            0.00     1.00             1.00           0.00     1.00
  ...
  agegrp Yng          0.19    1.21            0.06     1.06             1.28           0.25     1.29
  agegrp Old          0.04    1.04            0.11     1.11             1.16           0.15     1.17
  agegrp Mid          0.00    1.00            0.00     1.00             1.00           0.00     1.00
  Type M             -0.13    0.88            0.05     1.06             0.93          -0.07     0.93
  Type S              0.00    1.00            0.00     1.00             1.00           0.00     1.00
  Vehicle_Use PL      0.05    1.05           -0.09     0.92             0.96          -0.04     0.96
  Vehicle_Use WK      0.00    1.00            0.00     1.00             1.00           0.00     1.00

Numerical Example 1: Findings from the Model Comparison
- The LC modeling approach requires less modeling effort; the FS modeling approach provides more insight:
  o What drives the LC pattern, frequency or severity?
  o Frequency and severity can have different patterns.

Numerical Example 1: Findings from the Model Comparison (Cont.)
The loss cost relativities based on the FS approach can be fairly close to the loss cost relativities based on the LC approach when:
- The same pre-GLM treatments are applied to incurred losses and exposures for both modeling approaches
  o Loss capping
  o Exposure adjustments
- The same predictive variables are selected for all three models (frequency model, severity model and loss cost model)
- The modeling data is credible enough to support the severity model

Numerical Example 2: GLM Setup in Training Dataset

Frequency Model
  Target = Frequency = Claim Count / Exposure
  Link = Log
  Distribution = Poisson
  Weight = Exposure
  Variables = Territory, Agegrp, Deductable, Vehage_group, Credit_Score, AFA

  Type 3 Statistics     DF    ChiSq    Pr > ChiSq
  territory              2     5.9     0.2066
  agegrp                 2    25.36    <.0001
  vehage_group           4   294.49    <.0001
  Deductable             2    41.07    <.0001
  credit_score           2    64.1     <.0001
  AFA                    2    15.58    0.0004

Severity Model
  Target = Severity = Loss / Claim Count
  Link = Log
  Distribution = Gamma
  Weight = Claim Count
  Variables = Territory, Agegrp, Deductable, Vehage_group, Credit_Score, AFA

  Type 3 Statistics     DF    ChiSq    Pr > ChiSq
  territory              2    15.92    0.0031
  agegrp                 2     2.31    0.3151
  vehage_group           4    36.1     <.0001
  Deductable             2     1.64    0.4408
  credit_score           2     2.16    0.7059
  AFA                    2    11.72    0.0028

Severity Model (Reduced)
  Target = Severity = Loss / Claim Count
  Link = Log
  Distribution = Gamma
  Weight = Claim Count
  Variables = Territory, Agegrp, Vehage_group, AFA

  Type 3 Statistics     DF    ChiSq    Pr > ChiSq
  Territory              2    15.46    0.0038
  agegrp                 2     2.34    0.3107
  vehage_group           4    35.36    <.0001
  AFA                    2    11.5     0.0032
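The Type 3 tables above are standard GLM software output (the layout resembles SAS PROC GENMOD). In the Python setting of the earlier sketches, a comparable check on the dropped variables is a likelihood ratio test between the full and reduced severity models; this is an analogue for illustration, not the presenter's procedure, and the data and column names remain hypothetical.

```python
# Likelihood ratio test comparing the full and reduced severity models.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

policies = pd.read_csv("vehicle_records.csv")                 # placeholder input
claims = policies[policies["claim_count"] > 0].copy()
claims["severity"] = claims["loss"] / claims["claim_count"]

def gamma_glm(formula):
    return smf.glm(formula, data=claims,
                   family=sm.families.Gamma(link=sm.families.links.Log()),
                   var_weights=claims["claim_count"]).fit()

full = gamma_glm("severity ~ C(territory) + C(agegrp) + C(deductable)"
                 " + C(vehage_group) + C(credit_score) + C(afa)")
reduced = gamma_glm("severity ~ C(territory) + C(agegrp)"
                    " + C(vehage_group) + C(afa)")

# Twice the log-likelihood difference is approximately chi-square distributed,
# with df equal to the number of parameters dropped.
lr_stat = 2 * (full.llf - reduced.llf)
df = full.df_model - reduced.df_model
print("LR stat:", round(lr_stat, 2), "df:", df,
      "p-value:", round(stats.chi2.sf(lr_stat, df), 4))
```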

Numerical Example 2: GLM Output (Models Built in Training Data)

                    Frequency Model          Severity Model            Frq x Sev      Loss Cost Model (p=1.3)
                    Estimate  Rating Factor  Estimate  Rating Factor   Rating Factor  Estimate  Rating Factor
  Territory T1        0.03    1.03           -0.17     0.84             0.87          -0.15     0.86
  Territory T2        0.02    1.02           -0.11     0.90             0.92          -0.09     0.91
  Territory T3        0.00    1.00            0.00     1.00             1.00           0.00     1.00
  ...
  Deductable 100      0.33    1.38              -        -              1.38           0.36     1.43
  Deductable 250      0.25    1.28              -        -              1.28           0.24     1.27
  Deductable 500      0.00    1.00              -        -              1.00           0.00     1.00
  CREDIT_SCORE 1      0.82    2.28              -        -              2.28           0.75     2.12
  CREDIT_SCORE 2      0.52    1.68              -        -              1.68           0.56     1.75
  CREDIT_SCORE 3      0.00    1.00              -        -              1.00           0.00     1.00
  AFA 0              -0.25    0.78           -0.19     0.83             0.65          -0.42     0.66
  AFA 1              -0.03    0.97           -0.19     0.83             0.80          -0.21     0.81
  AFA 2+              0.00    1.00            0.00     1.00             1.00           0.00     1.00

(Severity columns are blank for Deductable and CREDIT_SCORE because these variables were dropped from the reduced severity model.)

Numerical Example 2: Model Comparison in Testing Dataset
- In the testing dataset, generate two sets of loss cost scores corresponding to the two sets of loss cost estimates:
  o Score_fs (based on the FS modeling parameter estimates)
  o Score_lc (based on the LC modeling parameter estimates)
- Compare the goodness of fit (GF) of the two sets of loss cost scores in the testing dataset using the log-likelihood

Numerical Example 2: Model Comparison in Testing Dataset (Cont.)

GLM to calculate the GF statistic of Score_fs:
  Data: Testing Dataset
  Target: Loss Cost
  Predictive Variables: None
  Error: Tweedie
  Link: Log
  Weight: Exposure
  P: 1.15 / 1.20 / 1.25 / 1.30 / 1.35 / 1.40
  Offset: log(Score_fs)

GLM to calculate the GF statistic of Score_lc:
  Data: Testing Dataset
  Target: Loss Cost
  Predictive Variables: None
  Error: Tweedie
  Link: Log
  Weight: Exposure
  P: 1.15 / 1.20 / 1.25 / 1.30 / 1.35 / 1.40
  Offset: log(Score_lc)
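A sketch of this holdout comparison, continuing the hypothetical setup: each score enters an intercept-only Tweedie GLM as a log offset, and the resulting log-likelihoods are compared across the grid of p values. It assumes the fitted freq_model, sev_model and lc_model and a testing data frame from the earlier sketches, and again uses eql=True as an approximation to the Tweedie likelihood.

```python
# Holdout goodness-of-fit comparison of the FS score vs. the LC score.
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

# `testing`, `freq_model`, `sev_model`, `lc_model` are assumed from the earlier sketches.
testing["loss_cost"] = testing["loss"] / testing["exposure"]
testing["score_fs"] = freq_model.predict(testing) * sev_model.predict(testing)
testing["score_lc"] = lc_model.predict(testing)

def holdout_llf(score_col, p):
    """Intercept-only Tweedie GLM with the score carried as an offset on the log scale."""
    fam = sm.families.Tweedie(link=sm.families.links.Log(), var_power=p, eql=True)
    fit = smf.glm("loss_cost ~ 1", data=testing, family=fam,
                  var_weights=testing["exposure"],
                  offset=np.log(testing[score_col])).fit()
    return fit.llf

for p in (1.15, 1.20, 1.25, 1.30, 1.35, 1.40):
    print(p, round(holdout_llf("score_fs", p), 1), round(holdout_llf("score_lc", p), 1))
```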

Numerical Example 2: Model Comparison in Testing Dataset (Cont.)

Log-likelihood from output
  p      Offset = log(Score_fs)   Offset = log(Score_lc)
  1.15   -3749                    -3744
  1.20   -3699                    -3694
  1.25   -3673                    -3668
  1.30   -3672                    -3667
  1.35   -3698                    -3692
  1.40   -3755                    -3748

The loss cost model has better goodness of fit.

Numerical Example 2: Findings from the Model Comparison
- In many cases, the frequency model and the severity model end up with different sets of variables; more than likely, fewer variables are selected for the severity model:
  o Data credibility for mid-size or small companies
  o Certain low-frequency coverages, such as BI
- As a result:
  o The FS approach provides more insight, but needs additional effort to roll up the frequency and severity estimates into LC relativities
  o In these cases, the LC model frequently shows better goodness of fit

A Frequently Applied Methodology: Loss Cost Refit
Loss Cost Refit
- Model frequency and severity separately
- Generate a frequency score and a severity score
- LC Score = (Frequency Score) x (Severity Score)
- Fit an LC model to the LC score to generate LC relativities by rating variable (see the sketch below)
- Originated from European modeling practice
Considerations and Suggestions
- Different regulatory environments in the European and US markets
- An essential assumption: the LC score is unbiased
- Validate using an LC model
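A sketch of the refit step under the same hypothetical setup (freq_model and sev_model as fitted earlier, and the placeholder policies data frame): the FS scores are multiplied into an LC score, and a loss cost GLM is refit to that score so the combined relativities come out on a loss cost basis.

```python
# Loss cost refit sketch (assumes freq_model, sev_model and `policies` from earlier sketches).
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

policies["lc_score"] = freq_model.predict(policies) * sev_model.predict(policies)

refit = smf.glm(
    "lc_score ~ C(territory) + C(agegrp) + C(vehage_group) + C(afa)",
    data=policies,
    family=sm.families.Tweedie(link=sm.families.links.Log(), var_power=1.30),
    var_weights=policies["exposure"],
).fit()

print(np.exp(refit.params).round(2))    # LC relativities by rating variable
```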

Constrained Rating Plan Study
- Update a rating plan while keeping certain rating tables or certain rating factors unchanged
- One typical example is creating a rating tier variable on top of an existing rating plan:
  o Keep up with market competition to avoid adverse selection
  o Manage disruptions

Constrained Rating Plan Study (Cont.)
Apply GLM offset techniques (see the sketch below):
- The offset factor is generated from the unchanged rating factors
- Typically, when creating a rating tier on top of an existing rating plan, the offset factor is the rating factor of the existing rating plan
- All the rating factors are on a loss cost basis
- It is natural to apply the LC modeling approach for rating tier development
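A sketch of the offset technique for building a rating tier on top of an existing plan: the existing plan's loss cost rating factor enters as a log offset so that only the tier relativities are estimated. The data frame and the column names (tier, current_plan_factor) are hypothetical.

```python
# Rating tier on top of an existing plan via a GLM offset (hypothetical data;
# current_plan_factor is the loss cost rating factor of the existing plan).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

policies = pd.read_csv("vehicle_records.csv")                 # placeholder input
policies["loss_cost"] = policies["loss"] / policies["exposure"]

tier_model = smf.glm(
    "loss_cost ~ C(tier)",
    data=policies,
    family=sm.families.Tweedie(link=sm.families.links.Log(), var_power=1.30),
    var_weights=policies["exposure"],
    offset=np.log(policies["current_plan_factor"]),   # existing plan held fixed
).fit()

print(np.exp(tier_model.params).round(2))              # tier relativities on top of the plan
```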

How to Select a Modeling Approach?
- Data-related considerations
- Modeling efficiency vs. business insights
- Quality of modeling deliverables
  o Goodness of fit (on a loss cost basis)
  o Other model comparison scenarios
- Dynamics of modeling applications
  o Class plan development
  o Rating tier or scorecard development
- Post-modeling considerations
  o Run an LC model to double-check the parameter estimates generated by an F-S approach

An Exhibit from a Brazilian Modeler