
Obtaining Predictive Distributions for Reserves Which Incorporate Expert Opinions
R. Verrall
A. Estimation of Policy Liabilities

LEARNING OBJECTIVES
5. Describe the various sources of risk and uncertainty that are associated with the determination of reserves. Calculate risk margins that consider these sources of risk and uncertainty.
6. Calculate the mean and prediction error of a reserve given an underlying statistical model.
7. Derive predictive distributions using bootstrapping and simulation techniques.
8. Identify data issues and related model adjustments for reserving models.
9. Test assumptions underlying reserve models.
10. Develop a distribution of reserves using weights and multiple stochastic models.

KNOWLEDGE STATEMENTS
a. Systemic risks and independent risks
b. Limitations of quantitative risk assessment
c. Risk correlations
d. Testing and evaluation of risk models

a. Distributions and distribution-free models
b. Comparison of chain ladder stochastic models

a. Comparison of methods
b. Simulation using bootstrapping
c. Simulation from parameters
d. Bayesian methods

a. Bayesian methods
b. Adjustments to various reserving techniques
c. Comparison of ODP Bootstrap and GLM Bootstrap models

Range of weight for Learning Objectives A.5 through A.10 collectively: 22-24 percent

READINGS
Marshall
Shapland
Verrall
Meyers (2015)

Synopsis
When expert opinion is used to select loss reserves, it is not clear whether the associated prediction errors remain valid. This paper uses Bayesian techniques that allow the reserving actuary to incorporate expert opinion while maintaining the integrity of the prediction error. The author first provides background on stochastic methods, including Mack, OD Poisson, and GLM, then presents the theory of the Bayesian model, and finally presents a handful of examples of the results.

Introduction
It is now possible to use a variety of methods to obtain reserve estimates, prediction intervals and predictive distributions. When doing reserving, the actuary may incorporate expert opinion, for example by adjusting data to reflect changes in benefits or claims handling, or by selecting LDFs different from the all-year average. In making these changes it is not clear how the predictive distribution should be changed. How does one calculate a confidence interval?

A Bayesian approach provides a solution, as it allows us to take into account our a priori opinion and also the strength of that opinion. This paper focuses on the use of prior knowledge in selecting LDFs and on the Bornhuetter-Ferguson method. Practically, Bayesian methods have been made much easier by the development of Markov Chain Monte Carlo techniques.

The following notation is used:
$C_{ij}$ = incremental claims for accident year $i$ and development period $j$
$D_{ij} = \sum_{k=1}^{j} C_{ik}$ = cumulative claims
$\lambda_j$ = expected development from age $j-1$ to age $j$, so that $E[D_{ij} \mid D_{i,j-1}] = \lambda_j D_{i,j-1}$
Weighted average LDFs: $\hat{\lambda}_j = \sum_i D_{ij} / \sum_i D_{i,j-1}$
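To make the notation concrete, here is a minimal sketch (my own illustration, not code from the paper or this guide) that builds the cumulative triangle $D_{ij}$ and the all-year volume-weighted LDFs from an incremental triangle. The 4 x 4 triangle used is the small example that appears later in this guide; nan marks cells that have not yet been observed.

```python
import numpy as np

# Small illustrative incremental triangle (same one used later in this guide)
C = np.array([
    [630, 239, 117,  50],
    [418, 313, 208, np.nan],
    [394, 280, np.nan, np.nan],
    [471, np.nan, np.nan, np.nan],
])

D = np.nancumsum(C, axis=1)          # cumulative claims D_ij (nan treated as 0)
observed = ~np.isnan(C)

ldfs = []
for j in range(1, C.shape[1]):
    rows = observed[:, j]            # accident years with both D_{i,j-1} and D_{i,j}
    ldfs.append(D[rows, j].sum() / D[rows, j - 1].sum())

print(np.round(ldfs, 3))             # volume-weighted LDFs, e.g. [1.577 1.203 1.051]
```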

Section 3: Stochastic Models for the Chain Ladder Technique
This section introduces stochastic models that produce the same estimate of unpaid losses as the chain ladder method.

Mack
Mack (1993) used a non-parametric approach and specified only the first two moments of cumulative claims. The mean is the same as the chain ladder method, and the variance is proportional to claims reported to date.

OD Poisson
Separately from Mack, GLM models were used to estimate reserves. Incremental claims are assumed to be over-dispersed Poisson with a multiplicative row/column structure for the mean, i.e. $\log E[C_{ij}] = c + \alpha_i + \beta_j$. The model result is the same as the chain ladder method. It also provides a prediction error.

Negative Binomial Model
The negative binomial model is parameterized recursively in terms of the development factors, with $E[C_{ij} \mid D_{i,j-1}] = (\lambda_j - 1) D_{i,j-1}$; the parameters $\lambda_j - 1$ are effectively incremental LDFs. This method, along with the OD Poisson, requires the sum of the incremental losses in each column to be positive.

For each model, we are able to calculate the prediction variance:
Prediction Variance = Process Variance + Estimation Variance
The prediction error takes into account both the estimation variance (the error in estimating our parameters) and the process variance. One advantage of bootstrap simulation is that it allows us to calculate the prediction error.

Section 4: Incorporating Expert Opinion About the Development Factors
The goal is to consider expert opinion in setting the reserves: specifically, to find a way to incorporate the expert opinion into the reserve estimate, and to determine a variance that makes sense. In Section 6, we will look at two cases:
- Using a different LDF for a particular row (perhaps because the reserve level is believed to be high or low)
- Choosing how many years of data to use in determining the LDF

Up until now we have used the same $\lambda_j$ for the development of every row in column j. We can loosen this restriction. In the following example we assume a 10 x 10 triangle.

1) If we believe the LDF for development period 2 to 3 should be 2.000 for the 3 most recent years (i.e., accident years 8, 9 and 10), then we could do the following. We require two development parameters for that column:
$\lambda_{i,3} = \lambda_3$ for $i \le 7$, and $\lambda_{i,3} = \lambda_3^{*}$ for $i \ge 8$.
This creates 2 LDFs for column 3: one for accident years 8, 9 and 10, and one for the other accident years. The other LDFs are treated as usual, one LDF per column. Finally, for the prior distributions, we set the prior mean of $\lambda_3^{*}$ to 2.000 with a variance chosen to reflect the strength of the prior information.

After we examine this example, we will also examine the following:
2) We would like to use the last 5 diagonals to select LDFs. Therefore, we set up the development parameters according to the position of each cell:
one set of column parameters applies when cell (i, j) is among the last 5 diagonals,
and a separate set applies when cell (i, j) is not among the last 5 diagonals.
Set prior distributions with large variances so that the results are based on the data. The form of the prior is usually selected (Gamma and LogNormal are often used) so that the numerical procedures work as well as possible.

Section 5: A Bayesian Model for the BF Method
In this section, the author introduces stochastic row parameters. We make the following assumptions:

Assumptions:
- Losses are over-dispersed Poisson, with mean $x_i y_j$, variance $\varphi x_i y_j$, and dispersion factor $\varphi$. (We are not told how to estimate $\varphi$, but presumably it must be estimated.)
  o We use deterministic column parameters $y_j$, based on the LDFs from the chain ladder method
  o $y_j$ = expected % paid in period $j$
  o $\sum_j y_j = 1$
- We also have row parameters $x_i$, which are an estimate of ultimate losses
- The row parameters $x_i$ have a prior distribution; a convenient one is Gamma$(\alpha_i, \beta_i)$
  o $E[x_i] = \alpha_i / \beta_i = M_i$
  o $Var[x_i] = \alpha_i / \beta_i^2$
  o This prior distribution is what allows us to incorporate expert opinion
- It is helpful to define, as in Mack (2000), $p_j = \sum_{k=1}^{j} y_k$ = expected % paid up to age $j$

With these assumptions, a Bayesian procedure gives us a result that is a blend of the chain ladder and BF results (just like Benktander). The estimate of incremental losses $C_{ij}$ is a credibility weighting of:

Chain ladder: $D_{i,j-1}(\lambda_j - 1)$
That is, the cumulative losses at the prior period times the LDF less one.

Bornhuetter-Ferguson: $M_i y_j$
This is the mean of the prior times the expected percent paid in period $j$.

The credibility weight depends on:
- $p_{j-1}$, the expected percent paid at the prior age
- $\beta_i$, the parameter from the prior distribution
- $\varphi$, the dispersion parameter from the ODP

Putting it all together, the mean is:
$Z_{ij} \times$ [chain ladder estimate] $+ (1 - Z_{ij}) \times$ [BF estimate], where $Z_{ij} = \frac{p_{j-1}}{p_{j-1} + \beta_i \varphi}$

Solution to the above model:
1) Calculate the chain ladder forecast in each cell
2) Calculate the a priori (BF) forecast in each cell
3) Calculate $Z_{ij}$ in each cell
4) Weight the chain ladder and a priori forecasts

An example: Estimate the unpaid losses using Verrall's method with stochastic row parameters. Table 1 has the incremental paid losses. Table 2 has the prior distribution for each row parameter.

Table 1: Incremental Paid (000)
  AY      1      2      3      4    Paid to Date (000)
 2012  6,927  4,073  1,568    740        13,308
 2013  5,655  2,348  1,867    471        10,347
 2014  5,763  3,217  2,364    571        11,915
 2015  5,460  3,885  1,898               11,243
 2016  6,953  4,355                      11,308
 2017  8,191                              8,191

                   1-2     2-3     3-4
 Incremental LDF  1.581   1.206   1.053

The incremental losses are modeled using an OD Poisson model with a dispersion factor of 80,000.

Table 2: Prior distributions
  AY    Mean (000)   Standard Deviation (000)
 2012     11,200          1,000
 2013     11,600          1,200
 2014     12,200          1,200
 2015     12,700          1,300
 2016     13,700          1,400
 2017     14,600          1,500

Solution
We're given the incremental LDFs, so we convert these to cumulative LDFs, and calculate $p_j$ and the column parameters $y_j$:

                    1       2       3       4
 Cumulative LDF   2.008   1.270   1.053   1.000
 p_j              0.498   0.787   0.950   1.000
 y_j              0.498   0.289   0.163   0.050

Now, calculate the chain ladder forecast losses. I prefer to calculate ultimate, then multiply by $y_j$ in each cell. You could also just develop the losses directly.

Ultimate 2017: 8,191 x 2.008 = 16,448; forecast for age 2: 16,448 x 0.289 = 4,753

Incremental Mean from Chain Ladder
  AY       1      2      3     4    Chainladder Ultimate
 2015                        592        11,839
 2016                 2,341  718        14,361
 2017          4,753  2,681  822        16,448
 y_j    0.498  0.289  0.163  0.050

Now, do the same with the prior mean: 14,600 x 0.289 = 4,219

Incremental Mean from BF
  AY       1      2      3     4    Prior Mean
 2015                        635        12,700
 2016                 2,233  685        13,700
 2017          4,219  2,380  730        14,600
 y_j    0.498  0.289  0.163  0.050

In this problem, we weren't given the $\beta_i$, so we need to calculate them. I can imagine a problem where you are given the $\beta_i$ directly. Since the prior mean is $\alpha_i/\beta_i$ and the prior variance is $\alpha_i/\beta_i^2$, we have $\beta_i$ = mean / variance. For 2017:

$\beta_{2017} = \frac{14,600}{1,500^2} = 0.00649$

Note that I left all the figures in thousands when calculating $\beta$. You can choose to work in 1's, or in thousands (or some other unit), but you need to stay consistent. The $\beta$ you calculate will be different depending on the unit, but the results will be the same as long as you are consistent.

  AY       beta
 2015    0.00751
 2016    0.00699
 2017    0.00649

We also need $\varphi$. Since we have selected to work in thousands, we write $\varphi = 80$.

Now, we can calculate $Z_{ij} = \frac{p_{j-1}}{p_{j-1} + \beta_i \varphi}$. For 2017, age 2:

$Z = \frac{0.498}{0.498 + 0.00649 \times 80} = 0.490$

I find it helpful to put the $\beta_i$ on the right, and the $p_{j-1}$ shifted over one cell on the bottom.

  AY        1      2      3      4      beta
 2015                         0.613   0.00751
 2016                 0.585   0.629   0.00699
 2017          0.490  0.603   0.647   0.00649
 p_{j-1}       0.498  0.787   0.950

The final step is the credibility weighting. For 2017, age 2, we have:

4,753 x 0.490 + 4,219 x (1 - 0.490) = 4,481

Bayesian Estimate
  AY      1      2      3     4
 2015                        609
 2016                 2,296  706
 2017          4,481  2,562  780

Summary
Assumptions
- Losses are ODP, with dispersion factor $\varphi$
- Use deterministic column parameters (volume-weighted average LDFs)
- Select a Gamma distribution for the prior distribution of each row parameter
Results
- Use the chain ladder to forecast incremental losses in each cell
- Use BF to forecast incremental losses in each cell; use the mean of the prior distribution for each row as the a priori ultimate
- Calculate $Z_{ij}$ for each cell
- Credibility weight: estimate = $Z_{ij}$ x CL + $(1 - Z_{ij})$ x BF

Notice that for the prior distribution, a large $\beta_i$ means a small variance for the prior. Thus, we would put less credibility on the actual losses (since we are confident in the prior). The formula for $Z_{ij}$ does exactly that, since $Z_{ij}$ will be small for large $\beta_i$. This means we can achieve the BF result by using a large $\beta_i$ for each row, achieve the chain ladder result by using a small $\beta_i$ for each row, or use $\beta_i$ to find something in between.

Notice also how $Z_{ij}$ responds to $\varphi$. A large $\varphi$, which represents large process variance, also has the impact of putting less credibility on the actual losses. Finally, the larger $p_{j-1}$, the more credibility is given to the actual losses. This is natural: as the accident year develops, we want to give more credibility to the actual results.

Notation
The paper writes the credibility weight and the BF term in terms of sums of the column parameters, as
$Z_{ij} = \frac{\sum_{k=1}^{j-1} y_k}{\beta_i \varphi + \sum_{k=1}^{j-1} y_k}$ and BF term $= (\lambda_j - 1)\left(\sum_{k=1}^{j-1} y_k\right)\frac{\alpha_i}{\beta_i}$.
I don't find that as intuitive as $Z_{ij} = \frac{p_{j-1}}{p_{j-1} + \beta_i \varphi}$ and BF term $= M_i y_j$; they are mathematically equivalent, since $p_{j-1} = \sum_{k=1}^{j-1} y_k$ and $(\lambda_j - 1) p_{j-1} = y_j$.
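As a check on the worked example, here is a small sketch (my own, not from the text) that recomputes the accident year 2017, age 2 figures. Small differences from the 4,753 / 4,219 / 4,481 shown above are due to rounding of intermediate factors; all figures are in thousands, as in the text.

```python
# Reproduce the AY 2017, age 2 credibility blend from the worked example above.
inc_ldf = [1.581, 1.206, 1.053]            # incremental LDFs 1-2, 2-3, 3-4

# cumulative-to-ultimate factors and expected % paid p_j
cum = [1.0]
for f in reversed(inc_ldf):
    cum.insert(0, cum[0] * f)              # [2.008, 1.270, 1.053, 1.000]
p = [1.0 / c for c in cum]                 # [0.498, 0.787, 0.950, 1.000]
y = [p[0]] + [p[j] - p[j - 1] for j in range(1, 4)]

D_2017, M_2017, sd_2017, phi = 8_191, 14_600, 1_500, 80
beta = M_2017 / sd_2017 ** 2               # ~0.00649

cl = D_2017 * cum[0] * y[1]                # chain ladder forecast for age 2 (~4,759)
bf = M_2017 * y[1]                         # BF forecast for age 2 (~4,225)
Z = p[0] / (p[0] + beta * phi)             # ~0.490
# blended estimate ~4,486; the text's 4,481 reflects rounded intermediate factors
print(round(Z, 3), round(Z * cl + (1 - Z) * bf))
```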

Column Parameters
Up to now, we estimated the column parameters ($y_j$) using the deterministic chain ladder technique, and then used a stochastic method to estimate the row parameters. We would prefer to use a stochastic method to find both the row and column parameters. The author does this by using a prior distribution, first for the columns, and then the rows. For the columns, the prior distribution is Gamma with a wide variance.

The result is the following estimate of losses in each cell:
$\hat{C}_{ij} = (\gamma_i - 1) \sum_{k=1}^{i-1} C_{kj}$
The $\gamma_i$ are the new row parameters. The sum is down the column (over earlier accident years, using forecast values where needed), rather than across the row. It is best seen with an example. Later we will show how to calculate the parameters, but first we can see how to use them.

We have the following incremental paid triangle, with row parameters $\gamma_i$ on the right:

 630   239   117    50     1.000
 418   313   208           1.968
 394   280                 1.421
 471                       1.326

$\hat{C}_{2,4} = (1.968 - 1)(50) = 48$
$\hat{C}_{3,3} = (1.421 - 1)(117 + 208) = 137$
$\hat{C}_{4,2} = (1.326 - 1)(239 + 313 + 280) = 271$

 630   239   117    50
 418   313   208    48
 394   280   137
 471   271

This makes sense: the expected losses at a given age should be similar to the losses for other accident years at the same age. This method is in effect taking the average of the cells in the column, and using that to estimate the prospective losses. For it to be an actual average, the $\gamma$'s would have to be the following:

 gamma_1 = 1.000, gamma_2 = 2.000, gamma_3 = 1.500, gamma_4 = 1.333

For example:
$\hat{C}_{3,3} = (1.500 - 1)(117 + 208) = 163$, which sums 2 values and multiplies by 0.500
$\hat{C}_{4,2} = (1.333 - 1)(239 + 313 + 280) = 277$, which sums 3 values and multiplies by 0.333
Continuing, we would also have gamma_5 = 1.250 and gamma_6 = 1.200.

Now, consider why the $\gamma$'s for our dataset above were not the $\gamma$'s that give an average. The reason is that the $\gamma$'s take into account the level of the losses in the row relative to other rows. We had $\gamma_2 = 1.968$ above; this is because the level of losses in accident year 2 is about 3% below the level of losses in accident year 1, so when forecasting $C_{2,4}$, we reduce the loss at $C_{1,4}$ by about 3%.
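A short sketch (my own illustration) of how the forecast formula is applied, reproducing the 48 / 137 / 271 cells above. Rows are filled top-down so that forecast cells can feed the column sums for later accident years.

```python
# Forecast incremental cells from the row parameters gamma_i: each missing cell
# is (gamma_i - 1) times the column sum of all earlier accident years' cells
# (observed or previously forecast) at the same development age.
import numpy as np

C = np.array([
    [630, 239, 117,  50],
    [418, 313, 208, np.nan],
    [394, 280, np.nan, np.nan],
    [471, np.nan, np.nan, np.nan],
])
gamma = [1.000, 1.968, 1.421, 1.326]

for i in range(1, 4):                      # fill rows top-down
    for j in range(4):
        if np.isnan(C[i, j]):
            C[i, j] = (gamma[i] - 1) * C[:i, j].sum()

print(np.round(C, 1))
# forecast cells: C[1,3]=48.4, C[2,2]=136.8, C[2,3]=41.4, C[3,1]=271.2, ...
```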

Calculating the gammas
There is an errata for Verrall: for the 2011-12 sittings, equation (5.4) was wrong. It has been corrected, and is therefore now testable. Don't memorize the formula; memorize how to calculate the $\gamma$'s, as shown below.

The following notation will be handy:
$\hat{U}_i$ = estimate of ultimate from the chain ladder method
$q_i$ = % unpaid losses for accident year $i$ (also based on the CL method)

We'll use the same triangle from above to calculate the $\gamma$'s.

Incremental Paid
 630   239   117    50
 418   313   208
 394   280
 471

We use the chain ladder method to determine an estimate of ultimate losses and % unpaid by accident year:

 Incremental Paid            % Unpaid   Ultimate   Paid to Date
 630   239   117   50          0.0%      1,036       1,036
 418   313   208               4.9%        987         939
 394   280                    20.9%        852         674
 471                          49.8%        939         471

 Incremental LDF (1-2, 2-3, 3-4):  1.577   1.203   1.051
 Cumulative LDF (ages 1-3):        1.994   1.264   1.051
 % Paid by age (y_j, ages 1-4):    50.2%   29.0%   16.0%   4.9%

$\gamma_2 = 1 + \frac{987 \times 4.9\%}{50} = 1 + \frac{48.4}{50} = 1.968$

The 48.4 is an estimate of the unpaid losses in accident year 2, and we divide that by the losses of accident year 1 at the same future age (50). The method concludes that the losses in AY 2 are slightly less than the losses in AY 1, based on this ratio.

We need an estimate of $\hat{C}_{2,4}$ for the next step, so we use the equation from above:
$\hat{C}_{2,4} = (1.968 - 1)(50) = 48.4$

$\gamma_3 = 1 + \frac{852 \times 20.9\%}{117 + 208 + 50 + 48.4} = 1 + \frac{178.1}{423.4} = 1.421$

Notice that 178.1 is an estimate of the unpaid losses for AY 3 (ages 3 & 4), and we divide this by the losses at ages 3 & 4 for the two older years.

Incremental Paid
 630   239   117    50
 418   313   208
 394   280
 471

The other $\gamma$'s are similar. The numerator is the expected unpaid based on the chain ladder. The denominator is all the losses at the same (future) ages for older accident years, using forecast values where actual losses are not yet available.
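The whole procedure can be written compactly. The sketch below (my own implementation of the steps described above, not code from the paper) computes the chain ladder quantities, then the gammas row by row, filling in forecast cells as it goes. It reproduces the gammas above up to rounding: it gives about 1.95 for gamma_2, where the text's 1.968 comes from rounding the intermediate 48.4 and the 4.9% figure.

```python
# Derive the row parameters gamma_i from standard chain ladder quantities,
# filling forecast cells as we go so that later gammas can use them.
import numpy as np

C = np.array([
    [630, 239, 117,  50],
    [418, 313, 208, np.nan],
    [394, 280, np.nan, np.nan],
    [471, np.nan, np.nan, np.nan],
])
n = C.shape[0]
obs = ~np.isnan(C)
D = np.nancumsum(C, axis=1)

# volume-weighted incremental LDFs and cumulative-to-ultimate factors
f = [D[obs[:, j], j].sum() / D[obs[:, j], j - 1].sum() for j in range(1, n)]
cum_to_ult = np.cumprod(f[::-1])[::-1]                    # from ages 1, 2, ..., n-1

gamma = [1.0]
filled = C.copy()
for i in range(1, n):
    last = obs[i].sum() - 1                               # index of latest observed age
    ult = D[i, last] * cum_to_ult[last]                   # CL ultimate for row i
    unpaid = ult - D[i, last]                             # = ult * (% unpaid)
    denom = filled[:i, last + 1:].sum()                   # older rows, same future ages
    g = 1 + unpaid / denom
    gamma.append(g)
    filled[i, last + 1:] = [(g - 1) * filled[:i, j].sum() for j in range(last + 1, n)]

print(np.round(gamma, 3))   # roughly [1.0, 1.95, 1.42, 1.33]; the text shows 1.968/1.421/1.326
```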

$\gamma_4 = 1 + \frac{939 \times 49.8\%}{239 + 313 + 280 + 117 + 208 + 136.8 + 50 + 48.4 + 41.4} = 1 + \frac{467.6}{1,433.6} = 1.326$

We had to calculate:
$\hat{C}_{3,3} = (1.421 - 1)(117 + 208) = 136.8$
$\hat{C}_{3,4} = (1.421 - 1)(50 + 48.4) = 41.4$

Incremental Paid
 630   239   117    50
 418   313   208
 394   280
 471

Once you see the pattern, the calculation is not difficult. The hardest part is the preliminary calculation of LDFs, ultimates, and percent paid and unpaid.

Example 1: You are given the following row parameters. Calculate the expected losses for accident year 4 at development periods 8, 9 and 10.

 gamma_1 = 1.000, gamma_2 = 2.390, gamma_3 = 1.577, gamma_4 = 1.360

 Year\Age    6     7     8     9    10
    1       574   146   140   227   68
    2       321   528   266   425
    3       147   496   280
    4       352   206

Example 2: Estimate the gamma parameters for the following triangle:

 Incremental Paid
  839   1,117   414   137
  826   1,288   634
  681   1,459
  728

Solution 1:
$\hat{C}_{4,8} = (1.360 - 1)(140 + 266 + 280) = 247$

To determine $\hat{C}_{4,9}$, we first need $\hat{C}_{3,9}$. That is, you calculate the mean going down the column:
$\hat{C}_{3,9} = (1.577 - 1)(227 + 425) = 376$
$\hat{C}_{4,9} = (1.360 - 1)(227 + 425 + 376) = 370$

 Year\Age    6     7     8     9     10
    1       574   146   140   227    68
    2       321   528   266   425    94.5
    3       147   496   280   376    93.8
    4       352   206   247   370    92.3

$\hat{C}_{2,10} = (2.390 - 1)(68) = 94.5$
$\hat{C}_{3,10} = (1.577 - 1)(68 + 94.5) = 93.8$
$\hat{C}_{4,10} = (1.360 - 1)(68 + 94.5 + 93.8) = 92.3$

The total reserve for accident year 4 is: 247 + 370 + 92.3 = 709

Solution 2:
Start off by doing standard chain ladder calculations.

 Incremental Paid                % Unpaid   Ultimate   Paid to Date
  839   1,117   414   137          0.0%      2,507       2,507
  826   1,288   634                5.5%      2,907       2,748
  681   1,459                     24.8%      2,846       2,140
  728                             71.6%      2,562         728

 Incremental LDF (1-2, 2-3, 3-4):  2.647   1.257   1.058
 Cumulative LDF (ages 1-3):        3.520   1.330   1.058
 % Paid by age:                    28.4%   46.8%   19.3%   5.5%

$\gamma_2 = 1 + \frac{2,907 \times 5.5\%}{137} = 1 + \frac{159.9}{137} = 2.167$

$\gamma_3 = 1 + \frac{2,846 \times 24.8\%}{414 + 634 + 137 + 159.9} = 1 + \frac{705.8}{1,344.9} = 1.525$

$\gamma_4 = 1 + \frac{2,562 \times 71.6\%}{1,117 + 1,288 + 1,459 + 1,344.9 + 705.8} = 1 + \frac{1,834.4}{5,914.7} = 1.310$

In the $\gamma_4$ denominator, 1,117 + 1,288 + 1,459 is the age-2 column for the three older years, 1,344.9 is the losses of years 1 and 2 at ages 3 and 4 (including the forecast 159.9), and 705.8 is the expected losses of year 3 at ages 3 and 4 (its chain ladder unpaid). You can double-check this by adding up the individual cells.

An alternative way to calculate the gamma parameters
A student pointed out another way to calculate the $\gamma$'s. It's more intuitive than the calculation proposed by the author, and it's completely analogous to the way we normally calculate LDFs. For an LDF we divide the losses in one column by the cumulative losses in the previous column. Here we do the same, but with rows: divide the row's paid to date by the sum of the older rows' losses at the same development ages, and add 1. I do like it for its simplicity, but I hesitate to recommend it on the exam, since it's not how the paper calculates them. Here it is, using Example 2.

 Incremental Paid
  839   1,117   414   137
  826   1,288   634
  681   1,459
  728

$\gamma_2 = 1 + \frac{826 + 1,288 + 634}{839 + 1,117 + 414} = 1 + \frac{2,748}{2,370} = 2.16$

$\gamma_3 = 1 + \frac{681 + 1,459}{839 + 1,117 + 826 + 1,288} = 1 + \frac{2,140}{4,070} = 1.53$

$\gamma_4 = 1 + \frac{728}{839 + 826 + 681} = 1 + \frac{728}{2,346} = 1.31$
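For completeness, a tiny sketch (my own) of this shortcut applied to Example 2:

```python
# Alternative gamma calculation described above (a student's shortcut, not the
# paper's formula): gamma_i = 1 + (paid to date for row i) / (sum of all older
# rows' losses at the same development ages).
import numpy as np

C = np.array([                       # Example 2 triangle
    [839, 1117, 414, 137],
    [826, 1288, 634, np.nan],
    [681, 1459, np.nan, np.nan],
    [728, np.nan, np.nan, np.nan],
])
obs = ~np.isnan(C)
for i in range(1, 4):
    k = obs[i].sum()                                 # number of observed ages in row i
    g = 1 + np.nansum(C[i]) / C[:i, :k].sum()
    print(f"gamma_{i + 1} = {g:.3f}")                # ~2.16, 1.53, 1.31
```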

Section 6: Implementation
Regarding the dispersion (or nuisance) factor $\varphi$: in a full Bayesian analysis, we should estimate it along with the other parameters. However, for ease of implementation, we instead use a plug-in estimate.

The model requires that you provide prior distributions for each parameter. It is in this step that you control how much to rely on the priors, and how much on the data. If you select large variances for the priors, then the result will be based on the data, and will be very close to the chain ladder method. On the other hand, if you select small variances for the priors, then the result will be similar to the BF method.

Results of the Model
The following data set of incremental paid claims (000) is used:

 Year\Age    1      2      3      4     5     6     7     8     9    10
    1       358    767    611    483   527   574   146   140   227   68
    2       352    884    934  1,183   446   321   528   266   425
    3       291  1,002    926  1,017   751   147   496   280
    4       311  1,106    776  1,562   272   352   206
    5       443    693    992    769   505   471
    6       396    937    847    805   706
    7       441    848  1,131  1,063
    8       359  1,062  1,443
    9       377    987
   10       344

The chain ladder results are:

            1-2     2-3     3-4     4-5     5-6     6-7     7-8     8-9    9-10
 Inc LDF   3.490   1.747   1.457   1.174   1.104   1.086   1.054   1.077   1.018
 Cum LDF  14.446   4.139   2.369   1.625   1.385   1.254   1.155   1.096   1.018

Table 4:
 Year   Paid to Date   Chainladder Reserve   Bayesian Mean   Bayesian Std. Dev.   Prediction Error (%)
   1       3,901
   2       5,339              95                  94               111                 118%
   3       4,909             470                 471               219                  47%
   4       4,586             709                 716               264                  37%
   5       3,873             985                 992               308                  31%
   6       3,692           1,420               1,424               375                  26%
   7       3,483           2,178               2,186               497                  23%
   8       2,864           3,920               3,935               791                  20%
   9       1,363           4,279               4,315             1,068                  25%
  10         344           4,626               4,671             2,013                  43%
 Total                    18,681              18,800             2,975                  16%

You can see that the Bayesian result (with vague priors) is nearly identical to the chain ladder result.
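On the plug-in estimate of the dispersion factor mentioned at the start of this section: one common choice (this is my own illustration of the standard plug-in approach, not a prescription from the paper) is the Pearson chi-square statistic of the chain ladder fitted incrementals divided by the residual degrees of freedom. The triangle below is the small 4 x 4 example used earlier in this guide, purely for illustration.

```python
# Plug-in estimate of phi: Pearson chi-square of the chain-ladder (ODP) fitted
# incrementals divided by (number of cells - number of fitted parameters).
import numpy as np

C = np.array([
    [630, 239, 117,  50],
    [418, 313, 208, np.nan],
    [394, 280, np.nan, np.nan],
    [471, np.nan, np.nan, np.nan],
])
n = C.shape[0]
obs = ~np.isnan(C)
D = np.nancumsum(C, axis=1)

f = [D[obs[:, j], j].sum() / D[obs[:, j], j - 1].sum() for j in range(1, n)]
cum = np.append(np.cumprod(f[::-1])[::-1], 1.0)  # age-to-ultimate factors, ages 1..n
p = 1.0 / cum                                    # expected % paid through each age
y = np.diff(np.append(0.0, p))                   # expected incremental proportions
ult = np.array([D[i, obs[i].sum() - 1] * cum[obs[i].sum() - 1] for i in range(n)])

m = np.outer(ult, y)                             # chain-ladder / ODP fitted incrementals
pearson = ((C[obs] - m[obs]) ** 2 / m[obs]).sum()
dof = obs.sum() - (2 * n - 1)                    # cells minus fitted parameters
print(round(pearson / dof, 1))                   # plug-in estimate of phi
```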

Section 6.2: Intervention in the Chain Ladder Technique
Now consider adding our opinion on the LDFs to the analysis.

Opinion on the Age 2-3 LDF
Specifically, we believe the development factor from age 2 to 3, for accident years 7 to 10, is 1.500. Notice that years 7 & 8 already have an observed 2-3 LDF, and years 9 & 10 don't yet. We set the prior for that LDF, in those rows, to be 1.500. Consider two cases: (1) use a large variance, so that the parameter is based on the data, and (2) use a small standard deviation of 0.1, so that the prior mean has greater influence.

The historical LDF triangle:

 Year    1-2     2-3     3-4     4-5     5-6     6-7     7-8     8-9    9-10
   1    3.143   1.543   1.278   1.238   1.209   1.044   1.040   1.063   1.018
   2    3.511   1.755   1.545   1.133   1.084   1.128   1.057   1.086
   3    4.448   1.717   1.458   1.232   1.037   1.120   1.061
   4    4.562   1.548   1.712   1.073   1.087   1.047
   5    2.564   1.873   1.362   1.174   1.138
   6    3.366   1.636   1.369   1.236
   7    2.923   1.878   1.439
   8    3.953   2.016
   9    3.619
 All Year Wtd Avg:  3.490   1.747   1.457   1.174   1.104   1.086   1.054   1.077   1.018

The 2-3 LDF from the chain ladder method is 1.747. In case (1), the estimate is 1.971; this is due to the higher LDFs in years 7 & 8 compared to prior years, as seen in this table. Case (1) is giving credibility to the data, so the resulting LDF is based on the actual LDFs for years 7 & 8. In case (2), the LDF estimate becomes 1.673, pulling the LDF about halfway towards 1.500 from 1.971.

The resulting reserves are:

              Chainladder             Large Variance           Small Variance
 Year      Reserve  Pred. Err %    Reserve  Pred. Err %    Reserve  Pred. Err %
 6 & prior   3,693      17%          3,719      16%          3,703      17%
   7         2,187      23%          2,196      23%          2,185      23%
   8         3,930      20%          3,937      20%          3,932      20%
   9         4,307      24%          4,998      27%          4,044      25%
  10         4,674      43%          5,337      44%          4,496      43%
 Overall    18,790      16%         20,190      17%         18,360      16%

The large variance case estimates a higher reserve, since it used the more recent, higher LDFs to forecast development from age 2 to 3. The small variance case happens to be close to the chain ladder, since its 2-3 LDF is also similar (1.673 vs. 1.747). Notice that even as the estimates change, the % prediction errors are quite stable.

Use the Last 3 Diagonals to Select LDFs
Now consider using only the last three diagonals in the Bayesian model to determine the development parameters. Several of the parameters change significantly. The first one happens to stay nearly the same, and the last three are based on three or fewer rows of data, and thus don't change.

               1-2     2-3     3-4     4-5     5-6     6-7     7-8     8-9    9-10
 All Rows     3.527   1.751   1.460   1.175   1.104   1.087   1.054   1.076   1.018
 3 Diagonals  3.579   1.852   1.393   1.155   1.085   1.099   1.054   1.076   1.018

It turns out that since some LDFs increase and some decrease, the impact on the overall result is not large. The reduction is mostly due to year 8, where the estimate is reduced because of the lower 3-4 factor.

Table 8:
          Chainladder                 Bayesian, 3 Diagonals
 Year   Reserve  Prediction Error   Reserve  Prediction Error
   2        98       115%                95       115%
   3       471        46%               469        46%
   4       711        38%               713        37%
   5       989        31%             1,042        30%
   6     1,424        27%             1,393        27%
   7     2,187        23%             2,058        24%
   8     3,930        20%             3,468        22%
   9     4,307        24%             4,230        27%
  10     4,674        43%             4,711        47%
 Total  18,790        16%            18,180        18%

Section 6.3: The Bornhuetter-Ferguson Method
Here we consider intervention on the level of each row. Consider two examples. The first uses small variances, and will approximate the BF method. The second uses less strong prior information, and produces results that lie between the BF and CL methods.

Use the following prior means for the row parameters (ultimate losses):
M_i = 5,500,000 for accident years 1 through 6
M_i = 6,000,000 for accident years 7 through 10
Assume each has a standard deviation of 1,000. Show the output of the model along with the standard BF.

 Year   Bayesian Mean Reserve   Bayesian Prediction Error   Prediction Error %   BF Reserve
   2            96                      111                      116%                 96
   3           483                      212                       44%                480
   4           736                      250                       34%                737
   5         1,118                      297                       27%              1,115
   6         1,533                      340                       22%              1,527
   7         2,305                      410                       18%              2,308
   8         3,474                      498                       14%              3,467
   9         4,547                      555                       12%              4,550
  10         5,587                      611                       11%              5,585
 Overall    19,880                    1,854                        9%             19,865

As expected, the Bayesian mean is very similar to the BF reserve, since we used strong priors. Notice that the prediction error has come down from 16% to just 9%. This is due to the low variance assumed in the priors. The model also produces the entire predictive distribution, in addition to the mean and variance.

Now, use the same prior means, but change the standard deviation to 1,000,000.

 Year   Bayesian Mean Reserve   Bayesian Prediction Error   Prediction Error %   BF Reserve   CL Reserve
   2            95                      112                      118%                 96           95
   3           470                      219                       47%                480          470
   4           717                      266                       37%                737          710
   5           995                      309                       31%              1,115          988
   6         1,431                      377                       26%              1,527        1,419
   7         2,198                      489                       22%              2,308        2,178
   8         3,839                      727                       19%              3,467        3,920
   9         4,417                      866                       20%              4,550        4,279
  10         5,390                    1,080                       20%              5,585        4,626
 Overall    19,550                    2,252                       12%             19,865       18,681

The result is between the BF and the CL. The prediction error is again lower than the 16% for the original model (vague priors). This methodology provides the full spectrum between the chain ladder and the Bornhuetter-Ferguson methods.

Section 7: Conclusions
This paper has shown how expert opinion, separate from the reserving data, can be incorporated into the prediction intervals for a stochastic model. The stochastic approach can also provide the full predictive distribution, in addition to the mean and variance.

Exercises
Obtaining Predictive Distributions for Reserves Which Incorporate Expert Opinions - R. Verrall

1) You are doing a reserve study for a book of Auto business. The claims department has informed you that beginning in January 2009, they increased the automatic reserves for bodily injury claims. The process is to leave this automatic reserve open until either the claim is paid and closed, or a suit is filed. They found that nearly all automatic reserves are closed by the time the accident year is 24 months old. It had been many years since the automatic reserve was updated, thus it was doubled at this time. What kind of model could you use to forecast IBNR? How could you specify the model?

 Incremental Incurred Losses
 Year     12      24      36      48
 2006   1,413   1,338     948     281
 2007   2,647   1,657     750     516
 2008   3,407   1,666   1,061     128
 2009   3,181     874   1,168
 2010   2,726     842
 2011   2,929

 Incremental LDFs
 Year   12-24   24-36   36-48
 2006   1.563   1.255   1.060
 2007   1.698   1.186   1.108
 2008   1.553   1.227   1.022
 2009   1.221   1.242
 2010   1.191

2) A Bayesian stochastic model was applied to the following losses. Estimate the unpaid losses using the row parameters provided.

 Incremental Paid Losses
 Year     12      24     36     48    gamma
   1     607     749    232     64    1.000
   2   1,614   1,218    510    260    2.293
   3   1,854   1,776     74           1.658
   4   1,546     770                  1.379
   5   1,382                          1.227

3) Estimate the unpaid losses using the Bayesian approach to the Bornhuetter-Ferguson method shown in Verrall. You are given that the prior distribution for the ultimate losses of each year has a gamma distribution with the following moments:

 Accident Year   Expected Losses   Standard Deviation
     2009             8,000              1,500
     2010             8,500              1,700
     2011             9,000              1,800

Losses are fully developed at 48 months. An OD Poisson model was fit to this data; the dispersion factor is 50.

 Incremental Paid Losses
 Year     12      24      36     48    Paid to Date
 2006   2,878   3,547   1,471   351
 2007   3,767   2,447   1,768   781
 2008   2,925   1,815     419   387
 2009   2,725   2,509     173            5,407
 2010   2,812   2,673                    5,485
 2011   5,046                            5,046

                         12-24   24-36   36-48
 Volume Weighted LDFs    1.860   1.169   1.072

                     12      24      36
 Cumulative LDFs   2.331   1.254   1.072

4) You're doing a reserve study of homeowners losses in a state where your book of business is small. To complement the reserve study, you are going to use development patterns from a group of states in the same region, where you have had a larger book of business for many years. You expect this group of states to be representative of what you'd see in the new state.

 Regional LDFs
                     12-24   24-36   36-48   48-60   60-72
 Incremental LDFs    2.183   1.405   1.092   1.038   1.018

                       12      24      36      48      60
 Cumulative LDFs     3.539   1.621   1.154   1.057   1.018

Specify a model that takes into account the region's LDFs and allows us to calculate a prediction error.

 Data from New State
 Incremental Paid Losses
 Year    12    24    36    48    60    72
 2006    86   296   151   108    38    29
 2007   246   642   262    46    77
 2008   344   134   351    66
 2009   104   452   600
 2010   584   575
 2011   335

 Incremental LDFs
 Year   12-24   24-36   36-48   48-60   60-72
 2006   4.442   1.395   1.203   1.059   1.043
 2007   3.610   1.295   1.040   1.064
 2008   1.390   1.734   1.080
 2009   5.346   2.079
 2010   1.985

5) We are setting reserves for a Workers Compensation book. The exposure base used is annual employee wages. Due to significant changes in the market, starting in early 2010, many clients renegotiated the wages of their employees. You have estimated that the average wage is now 40% lower than it was in 2009 and prior. This is a rough estimate, based on discussions with a few clients. Some of the coverages provided depend on the wage of the employee; a large portion do not. Prior to 2010, this book was stable, with average losses per $100 of payroll very steady. Using the Bayesian procedures Verrall presented, how could we set up a model to estimate the unpaid claims?

 Year   Annual Wages (000)
 2006       50,000
 2007       52,000
 2008       54,000
 2009       51,000
 2010       36,000
 2011       29,000

 Incremental Losses
 Accident Year    12    24    36    48
 2006            650   832   497   285
 2007            663   601   434   250
 2008            569   734   445   285
 2009            697   780   370
 2010            567   689
 2011            607

6) Given the following incremental paid losses:

         1       2      3     4
  1     363     802    530    54
  2     570   1,072    208
  3     378   1,396
  4     257

a) Calculate the row parameters for each accident year.
b) Using the chain ladder method, forecast the incremental losses for accident year 4 at each age.
c) Using the row parameters, forecast the incremental losses for accident year 4 at each age.

7) We are given the following incremental paid triangle, along with the cumulative LDFs (volume weighted).

         1      2      3      4     5     6     7     8     9    10   Paid to Date
   1    388  1,053  1,157    822   382   371   220   166   130   39      4,728
   2    327  1,661  1,053    544   520   236   308   259    28           4,936
   3    538  1,135  1,630    223   400   346    99   218                 4,589
   4    790    872    849    902   303   367   (27)                      4,056
   5    315  1,457    975    771   187   339                             4,044
   6    161  1,400    226    510   646                                   2,943
   7    360    845    354  1,108                                         2,667
   8    225    427  1,201                                                1,853
   9    264  1,156                                                       1,420
  10     98                                                                 98

 Cum. LDF  11.180  2.816  1.735  1.357  1.208  1.113  1.074  1.025  1.008  1.000

a) Calculate the row parameters for the first 5 accident years.

Exercises - Solutions
Obtaining Predictive Distributions for Reserves Which Incorporate Expert Opinions - R. Verrall

1) We'd like to separate the LDF for ages 12 to 24 for accident years 2009 and 2010 from prior accident years. We can use a Bayesian model in which incremental losses are negative binomial. We'll define two column parameters for development from 12 to 24:
- one column parameter for development from ages 12 to 24 for AY 2008 & prior
- a second column parameter for development from ages 12 to 24 for AY 2009 & later
- the usual column parameters for the other columns
Each parameter has a prior distribution; select the Gamma distribution. Since we were not provided additional information about the development pattern, use vague priors (large variances) in the Gamma distribution; this way the parameters will be estimated from the loss data.

2)
 Year    12     24     36     48     Unpaid Losses
   3                          213         213
   4                   309    204         513
   5           1,024   255    168       1,447
                                  Total: 2,173

$\hat{C}_{3,4} = (1.658 - 1)(64 + 260) = 213$
$\hat{C}_{4,3} = (1.379 - 1)(232 + 510 + 74) = 309$
$\hat{C}_{4,4} = (1.379 - 1)(64 + 260 + 213) = 204$
$\hat{C}_{5,2} = (1.227 - 1)(749 + 1,218 + 1,776 + 770) = 1,024$
$\hat{C}_{5,3} = (1.227 - 1)(232 + 510 + 74 + 309) = 255$
$\hat{C}_{5,4} = (1.227 - 1)(64 + 260 + 213 + 204) = 168$

3) Find $\beta_i$ for each year: $\beta_i$ = Expected Losses / (Standard Deviation)^2, and $\alpha_i = \beta_i \times$ Expected Losses.

 Accident Year   Expected Losses   Standard Deviation    beta      alpha
     2009             8,000             1,500           0.00356    28.44
     2010             8,500             1,700           0.00294    25.00
     2011             9,000             1,800           0.00278    25.00

We need to find the CL estimate of unpaid losses in each cell, the BF estimate too, and finally Z in each cell.

Chain Ladder
 Year     24      36     48
 2009                   389
 2010            927    462
 2011   4,340  1,586    790

2009: 5,407 x (1.072 - 1) = 389
2010: 5,485 x (1.169 - 1) = 927;  (5,485 + 927) x (1.072 - 1) = 462
2011: 5,046 x (1.860 - 1) = 4,340;  (5,046 + 4,340) x (1.169 - 1) = 1,586;  (5,046 + 4,340 + 1,586) x (1.072 - 1) = 790

Expected % paid in each period, from the cumulative LDFs: 1/2.331 = 42.9% in the first 12 months; 1/1.254 - 1/2.331 = 36.8% from 12 to 24; 1/1.072 - 1/1.254 = 13.5% from 24 to 36; 1 - 1/1.072 = 6.7% from 36 to 48.

Bornhuetter-Ferguson
2009: 8,000 x 6.7% = 536
2010: 8,500 x 13.5% = 1,148; 8,500 x 6.7% = 570

 Year     24      36     48
 2009                   536
 2010          1,148    570
 2011   3,321  1,215    603
 Expected Paid:  36.8%  13.5%  6.7%

Z
 Year     24      36      48
 2009                   0.840
 2010           0.844   0.864
 2011   0.755   0.852   0.870

The cumulative expected % paid is 42.9% at 12 months, 79.8% at 24 months, and 93.3% at 36 months. Then $Z_{ij} = \frac{p_{j-1}}{p_{j-1} + \beta_i \varphi}$:

2009, age 48: $Z = \frac{93.3\%}{93.3\% + 0.00356 \times 50} = 0.840$   (0.00356 x 50 = 17.8%)
2010, age 36: $Z = \frac{79.8\%}{79.8\% + 0.00294 \times 50} = 0.844$   (0.00294 x 50 = 14.7%)
2010, age 48: $Z = \frac{93.3\%}{93.3\% + 14.7\%} = 0.864$

On the exam, I wouldn't write out these next three calculations, since the ones above show I know how to do it.
2011, age 24: $Z = \frac{42.9\%}{42.9\% + 0.00278 \times 50} = 0.755$   (0.00278 x 50 = 13.9%)
2011, age 36: $Z = \frac{79.8\%}{79.8\% + 13.9\%} = 0.852$
2011, age 48: $Z = \frac{93.3\%}{93.3\% + 13.9\%} = 0.870$

Finally, calculate the estimate using Z x CL + (1 - Z) x BF:

 Year     24      36     48    Unpaid Losses
 2009                   413         413
 2010            961    477       1,438
 2011   4,090  1,531    765       6,386
                            Total: 8,237

389 x 0.840 + 536 x (1 - 0.840) = 413
927 x 0.844 + 1,148 x (1 - 0.844) = 961
4,340 x 0.755 + 3,321 x (1 - 0.755) = 4,090

4) We want a model that takes into account the small state's data, but at the same time gives a large amount of weight to the more credible LDFs from the region. Use a Bayesian model with stochastic column parameters. The incremental losses are negative binomial. Use a prior distribution for the column parameters. The mean of the prior distribution for each parameter will be the corresponding regional LDF. Put a relatively small variance around these parameters, so that the analysis puts a significant amount of weight on the regional LDFs.

5) Due to the distorted payroll figures, we cannot use a BF-type approach for accident years 2010 and 2011. We could use a Bayesian approach with stochastic row parameters:
- Use OD Poisson for the incremental paid losses.
- The row parameters could have a Gamma prior distribution. The mean of each parameter could be based on the historical losses per unit of annual wages (taking into account any trend and the 40% wage reduction), applied to that year's wages.
- For older years, we could use a small variance, due to the steady loss rate. For 2010 and 2011 we want to use a very large variance, to recognize that the loss rate is not going to be the same as in prior years; this way we'll put more weight on the actual loss emergence.

6) a) We need the chain ladder ultimate and % unpaid by accident year.

        1       2      3     4    % Unpaid   Ultimate   Paid to Date
  1    363     802    530    54                1,749        1,749
  2    570   1,072    208           3.1%       1,909        1,850
  3    378   1,396                 23.3%       2,312        1,774
  4    257                         78.0%       1,170          257

            1-2     2-3     3-4
 Inc LDF   3.494   1.263   1.032
 Cum LDF   4.554   1.303   1.032

$\gamma_2 = 1 + \frac{1,909 \times 3.1\%}{54} = 1 + \frac{59.2}{54} = 2.096$

$\hat{C}_{2,4} = (2.096 - 1)(54) = 59.2$

$\gamma_3 = 1 + \frac{2,312 \times 23.3\%}{530 + 208 + 54 + 59.2} = 1 + \frac{538.7}{851.2} = 1.633$

$\gamma_4 = 1 + \frac{1,170 \times 78.0\%}{802 + 1,072 + 1,396 + 851.2 + 538.7} = 1 + \frac{912.6}{4,659.9} = 1.196$

Notice the denominator for $\gamma_4$ is made up of these pieces: the age-2 column for the three older years (802 + 1,072 + 1,396), the losses of years 1 and 2 at ages 3 and 4 including the forecast (851.2), and the expected unpaid for year 3 (538.7). You only have to add 5 numbers, rather than nine.

b) Chain ladder forecast for accident year 4:
 Age              1       2         3         4
 Cumulative      257    898.0   1,134.2   1,170.5
 Incremental             641.0     236.2      36.3

257 x 3.494 = 898.0;  898.0 x 1.263 = 1,134.2;  1,134.2 x 1.032 = 1,170.5

c) Using the row parameters:
$\hat{C}_{4,2} = (1.196 - 1)(802 + 1,072 + 1,396) = 641$
$\hat{C}_{3,3} = (1.633 - 1)(530 + 208) = 467.2$
$\hat{C}_{4,3} = (1.196 - 1)(530 + 208 + 467.2) = 236$
$\hat{C}_{3,4} = (1.633 - 1)(54 + 59.2) = 71.7$
$\hat{C}_{4,4} = (1.196 - 1)(54 + 59.2 + 71.7) = 36$

Notice the result is identical to the chain ladder.

7) First, chain ladder ultimates and % unpaid. E.g., for year 3: 4,589 x 1.025 = 4,704, and the % unpaid is 1 - 1/1.025 = 2.4%.

 Year      6     7     8     9    10    % Unpaid   Ultimate
   1     371   220   166   130    39       0.0%      4,728
   2     236   308   259    28              0.8%      4,975
   3     346    99   218                    2.4%      4,704
   4     367   (27)                         6.9%      4,356
   5     339                               10.2%      4,501

$\gamma_1 = 1$

$\gamma_2 = 1 + \frac{4,975 \times 0.8\%}{39} = 1 + \frac{39.8}{39} = 2.02$

$\gamma_3 = 1 + \frac{4,704 \times 2.4\%}{130 + 28 + 39 + 39.8} = 1 + \frac{112.9}{236.8} = 1.477$

$\gamma_4 = 1 + \frac{4,356 \times 6.9\%}{166 + 259 + 218 + 236.8 + 112.9} = 1 + \frac{300.6}{992.7} = 1.303$

$\gamma_5 = 1 + \frac{4,501 \times 10.2\%}{220 + 308 + 99 + (-27) + 992.7 + 300.6} = 1 + \frac{459.1}{1,893.3} = 1.242$

In each denominator, the older years' losses at the same future ages are built up from the previous step: for $\gamma_4$, 236.8 and 112.9 are the expected losses of years 1-2 and year 3 at ages 9-10, already computed for $\gamma_3$; for $\gamma_5$, 992.7 and 300.6 play the same role for ages 8-10.

Past Exam Problems
2011: No problems.
2012 #8 (3.75 pts)


2013 #9 (3.5 points)
An actuary is considering a Bayesian approach to developing a predictive loss distribution based on the deterministic chain ladder method using the all years weighted average link ratios.
a. (0.5 point) To produce results that closely resemble the deterministic chain ladder outcome, explain whether the actuary should select high or low variances for the prior distributions of the link ratios.
b. (0.5 point) The actuary decides to override the link ratios suggested by the data for the 36-48 month maturity interval with a judgmental selection. However, the actuary is less confident of this selection than of the all years weighted averages used for the other maturity intervals. Describe how to change the prior distributions of the link ratios to adjust for this.
c. (1 point) Discuss the effect of the change implemented in part b. above on the simulated results, addressing both the mean and the prediction error.
d. (1 point) Describe one advantage that a Bayesian approach has over a bootstrapping algorithm and one advantage that a Bayesian approach has over the Mack method.
e. (0.5 point) Discuss a modification to the Bayesian framework for the chain ladder method so that it applies to the Bornhuetter-Ferguson method.

2014 #10 (1 pt)

2015: No questions on the 2015 exam.

2016 #11 (2.5 pts)

2017 # (2.5 pts) Given the following information:
- Cumulative reported losses at 24 months for accident year 2015 are $9,000
- The following reported development factors were derived:

 Age   Loss Development Factor   Cumulative Development Factor   % Reported
  12          2.000                       3.000                     33.3%
  24          1.364                       1.500                     66.7%
  36          1.073                       1.100                     90.9%
  48          1.025                       1.025                     97.6%
  60          1.000                       1.000                    100.0%

- Incremental losses $C_{ij}$ follow an over-dispersed Poisson distribution with mean $x_i y_j$ and variance $\varphi x_i y_j$
- The variable $x_i$ represents the expected ultimate losses for accident year $i$
- The variable $y_j$ represents the proportion of ultimate losses that emerge in development year $j$
- The prior distribution for $x_i$ is gamma with mean $\alpha_i / \beta_i$ and variance $\alpha_i / \beta_i^2$
- The dispersion parameter $\varphi$ for the over-dispersed Poisson distribution is 9.125
- The accident year 2015 estimates for $\alpha_i$ and $\beta_i$ are 100 and 0.01234 respectively
- The mean of $C_{ij}$ for this Bayesian model is a credibility-weighted combination of the chain ladder projection $D_{i,j-1}(\lambda_j - 1)$ and the prior-based (BF) estimate, where $\lambda_j$ is the incremental chain ladder loss development factor for development year $j$ and $D_{ij}$ is the cumulative losses for accident year $i$ as of development year $j$

a. (2 points) Calculate the incremental losses for accident year 2015 expected to emerge between 24 and 48 months of development using the model.
b. (0.5 point) Identify and briefly describe what parameter in the model would have to change in order to produce IBNR estimates closer to chain ladder indications.

Past Exam Solutions
2012 #8 (3.75 pts)

We need to find all the terms that go into the credibility formula:
- Cumulative losses to date: 5,850 + 1,500 = 7,350
- From the prior distribution: alpha = 100 and prior mean alpha/beta = 8,749, so beta = 0.01143
- Incremental LDF for the next development period: 1.112
- % reported at 24 months: the LDF from 24 to ultimate is 1.112 x 1.035 x 1.006 = 1.158, so the % reported is 1/1.158 = 86.4%

The formula for Z on the exam is different from the paper's; using the one from the paper:
Z = 86.4% / (86.4% + 0.01143 x 8.429) = 0.900

The formula for the mean is also different in the exam question. Using the one written in the question, with the chain ladder term (1.112 - 1) x 7,350 = 823.2, the estimate comes to 881.2. If we used the formula in the text, we would use for the term on the right side (the BF term, the prior mean times the expected percent emerging in the period) 846.7.

b) The term Z serves as a credibility weight. It gives weight to the 823.2 above, which is the chain ladder estimate: it takes the losses paid to date and projects the next period. The complement of Z is the weight given to 846.7. That estimate of losses is based on the prior mean, thus it is like the a priori of the BF method.

This problem had two errors in the question: one was the formula for Z, and the other was the formula for the mean. These should have been:
$Z_{ij} = \frac{p_{j-1}}{p_{j-1} + \beta_i \varphi}$
Mean $= Z_{ij} D_{i,j-1}(\lambda_j - 1) + (1 - Z_{ij})(\lambda_j - 1)\left(\sum_{k=1}^{j-1} y_k\right)\frac{\alpha_i}{\beta_i}$
The last term I prefer to write as $M_i y_j$. If a formula doesn't look right to me on the exam, I will always write clearly what assumption I am making. In this problem, I calculated Z as described in the text, and the mean as written in the question.

Examiner Comments
The model solution is based on the original paper's interpretation of calculating Z rather than the formula in the exam. There were a couple of differences between the formula on the exam and the formula in the paper. Because of the potential confusion, a number of model solutions were given full credit. The formula for the mean of the Bayesian model given on the exam also differed slightly from the correct formula; candidates who used the correct formula instead will receive full credit as well. Additionally, candidates who used alternative formulas to calculate the BF estimate were given full credit as long as the method was accurate. Additionally, the top of the summation term should have been 12, rather than 13. Most candidates ignored this difference but both answers were accepted.

Some of the common errors where candidates lost points include:
- Using the incremental LDF, or the percent reported between 24 and 36 months, in the calculation
- Calculating the a priori expected losses using the chain ladder method rather than from the prior parameters
- Incorrect calculation of the percent reported (e.g., using the 36-to-ultimate LDF rather than the 24-to-ultimate LDF)

The majority of candidates had little problem with part b. To receive full credit, candidates had to discuss how the formula is a credibility-weighted average of the chain ladder and BF methods and had to identify which part of the formula was chain ladder and which part was BF.

2013 #9 (3.5 points)
a) High variances. A high variance on the prior implies we have little confidence in the priors, and thus the model will select an outcome mostly based on the actual data.

b) Since he wants to override, he should put his selected LDF as the prior mean, and put a smaller variance around his 36-48 prior than around the other priors. He shouldn't make the variance too small, though, given his lack of confidence in the selection.
This is the answer I put, and would still put. Note that it's different from what the CAS wanted. I believe the CAS answer is contradictory: it assumes the actuary doesn't believe the data and thus picks his own LDF, and at the same time doesn't believe his own LDF. He doesn't have much confidence in either, in which case it's not clear which should receive more weight. Their preferred answer puts more weight on the actual data; I went the other way, and put more weight on the actuary's selected LDF.
CAS Sample Solution 1: Let the distribution for the 36-48 LDF have the actuary's opinion as the expected value, but leave a wide variance around the estimate. This will let the model consider the actuary's selection to some degree, but will still use the historical data to determine the parameter.
CAS Sample Solution 2: Put a distribution with a mean equal to his selection, but with large variance. For example, for the 36-48 interval LDF, set the mean to 1.5 with a large variance.
Examiner Comment: In order to receive full credit candidate responses had to include the use of high variance for the 36-48 LDF interval due to the actuary's high uncertainty with the selected LDF.

c) The additional weight put on the prior LDF will change the mean loss (up if the selected LDF is higher, down otherwise). The prediction error will come down, since we are putting a lower variance around the prior. This is based on my answer to b), which would be different had I answered as the CAS expected.
CAS Sample Solution 1: It will pull the 36-48 LDF closer to the actuary's estimate. Because the LDF's distribution specified a large variance, the prediction error will be similar to the chain ladder, though probably larger as the variance selection is large.
CAS Sample Solution 2: The simulated results will incorporate the expert opinion for the 36-48 LDF, and the prediction error will be higher since we are less confident in our selection for this LDF than the weighted average.
Examiner Comment: In order to receive full credit candidate responses had to discuss the effect of the change in b. on the mean and prediction error of the simulated results. A candidate's response may receive part c. credit even if they did not receive full credit for part b.

d) Bootstrapping: The Bayesian approach allows the actuary to provide his expert opinion on the unpaid losses, while maintaining the integrity of the variance estimate of the unpaid losses.

Mack Method: Mack provides a mean and variance for the unpaid losses (specifically, for each row). The Bayesian method provides a full distribution of the unpaid losses, not just the first two moments.

e) You would put very small variances around the priors used for the row parameters. The closer you want to be to BF, the smaller the variance should be.

CAS Sample Solution 1
a) High variances. If we are not confident in our prior estimates, a wide variance will cause the model to output parameters based on the actual data rather than our prior opinion.
b) Let the distribution for the 36-48 LDF have the actuary's opinion as the expected value, but leave a wide variance around the estimate. This will let the model consider the actuary's selection to some degree, but will still use the historical data to determine the parameter.
c) It will pull the 36-48 LDF closer to the actuary's estimate. Because the LDF's distribution specified a large variance, the prediction error will be similar to the chain ladder, though probably larger as the variance selection is large.
d) Over bootstrapping: Can insert your opinion into the parameter selection without much difficulty. Over Mack: We will get a full distribution of the loss estimate, not just the first two moments.
e) We could insert row parameters, one for each accident year, and specify a relatively tight variance around them. This will make the model use the prior estimate more, as in BF where the reserve estimate is based on our prior expected loss ratio.

CAS Sample Solution 2
a) Use large variances for the prior distributions to put more weight on the chain ladder outcome. The larger variance reflects that we are not as confident in our prior distribution.
b) Put a distribution with a mean equal to his selection, but with large variance. For example, for the 36-48 interval LDF, set the mean to 1.5 with a large variance.
c) The simulated results will incorporate the expert opinion for the 36-48 LDF, and the prediction error will be higher since we are less confident in our selection for this LDF than the weighted average.
d) Compared to bootstrapping, the Bayesian approach can incorporate expert knowledge into the selection of the ratios. Compared to Mack, the full predictive distribution can be easily calculated and we can calculate the prediction error as the square root of the MSEP of the distribution.
e) If we use really strong priors (i.e., low variance) for the row parameters, this allows us to set the row parameters equal to the BF estimate of ultimate for each year. This will replicate the BF method.

Examiner Comment
a) Responses with low variance were not given credit.
b) In order to receive full credit candidate responses had to include the use of high variance for the 36-48 LDF interval due to the actuary's high uncertainty with the selected LDF.
c) In order to receive full credit candidate responses had to discuss the effect of the change in b. on the mean and prediction error of the simulated results. A candidate's response may receive part c. credit even if they did not receive full credit for part b.
d) Candidates were given full credit for responses describing the same advantage of Bayesian over both. For example, full credit could be given for a response which identified and described the

accommodation of the actuary's expert opinion in the predictive distribution for reserves afforded by the Bayesian approach as an advantage over both Mack and bootstrapping.
e) Overall, this question part appeared to be the most difficult for candidates. Candidates' responses frequently discussed the BF modification to the chain ladder method but did not address predictive or stochastic features. Credit was not given to candidates who only discussed the deterministic, non-stochastic BF modification to the chain ladder.

2014 #10 (1 pt)
a) i) Expert opinion can be used in selecting loss development factors. This may be used when payment patterns are changing due to a change in process, and these changes haven't made it into the historical data yet.
ii) Expert opinion can be used in selecting row parameters, often expected ultimate losses. This may be useful when there is a change in expected losses that has not yet shown up in the data.

b) A large $\beta_i$ means a smaller variance for the prior. In this case, when it is used to describe a row parameter, a large $\beta_i$ gives less weight to the data and more weight to the expert opinion (the a priori of the Bornhuetter-Ferguson method).
The credibility formula: $Z_{ij} = \frac{p_{j-1}}{p_{j-1} + \beta_i \varphi}$. A large $\beta_i$ implies a smaller $Z_{ij}$, that is, more weight to expert opinion.

EXAMINER'S REPORT
Part a
The candidate was expected to identify two types of expert opinion that could be used to modify model results. In order to achieve full credit, the candidate had to identify an issue that an expert may have commentary on (e.g., a claims manager might have input on claim process changes) and how that opinion would or could impact the model (e.g., judgmentally select an LDF rather than relying on model output). Many candidates received only partial credit because they did not clearly identify the expert or the expert's view but limited their answer to what model parameter would change (e.g., "Could make manual adjustments to empirical age-to-age factors.").
Part b
The candidate was expected to understand the relationship between the beta parameter and the relative weight given to the BF vs. chain ladder methodologies. The majority of candidates received full credit for this part. Generally, candidates who did not receive full credit either had the relationship reversed or simply failed to attempt the question. There were a few candidates who discussed the beta parameter and its relationship to credibility and/or variance but did not state the final effect on the weighting of the BF and chain ladder.