Why Modeling?

For lines of business with catastrophe potential, we don't know how much past insurance experience is needed to represent possible future outcomes, or how much weight should be assigned to each year's experience. Reliance on actual insured experience doesn't allow accurate measurement of future expected loss. We need a longer experience period, especially for frequency. Computer simulation can be used not only to measure losses, but also to develop risk loadings that compensate for variation in outcomes.

What to Model

Our goal is a model that simulates what could realistically occur, based on information relevant to the geographic area being considered. For frequency, there is a long history to help gauge the relative likelihood of landfall in a given area. For severity, older storms don't offer useful insured loss information: the same storm in 1970 would have a very different impact today. A computer simulation model for the hurricane peril can take the characteristics of a storm and replicate the wind speeds over its course after landfall. Validation of the model examines actual loss experience from storms that have occurred in the recent past.

How to Model for Severity

The severity component comprises three distinct modules:
1) Event Simulation - Science
2) Damageability of Insured Properties - Engineering
3) Loss Effect on Exposures - Insurance

Module 1 reproduces the natural phenomena. Module 2 estimates the damage sustained by a given property exposed to the simulated event. Module 3 incorporates the results of the first two modules and adjusts for factors such as deductibles, coinsurance, insurance to value, and reinsurance. Module 3 is the company-specific module; it incorporates the factors that describe an insurer's in-force book of business. Module 3 is also used for risk analysis.

The severity component is usually deterministic: the computer simulates that event occurring today, with the resulting losses to insured exposures.
For any particular set of parameters, the losses will be stochastic. We use damage curves to represent the average loss results.

How to Model for Frequency

Deterministic catastrophe models are not appropriate for ratemaking, because ratemaking requires incorporating the relative frequency, or probability, of each type of storm. We add a frequency component to the hurricane model by analyzing long-term weather records of hurricanes, supplemented with informed judgment from experts. The past data are fitted to derive probability distributions of the key input parameters. Sampling techniques such as Monte Carlo randomly select the parameters from each distribution. Monte Carlo assigns an equal probability to all sampled items from the entire population. One drawback is the lack of precision in estimating unlikely events. We can overcome this either by generating a very large sample size or by stratified random sampling. Stratified sampling divides the population into homogeneous groups (strata), allowing a more accurate estimate of the distribution within each. We can then combine these estimates into a precise estimate of the overall population with a smaller sample size. Another benefit is the ability to sample a larger number of events in each stratum than their relative probability in the overall population would dictate.

We must also develop the storm path and landfall location for each modeled storm, based on actual historical events and on other available sources. When the results are combined, the probabilities are conditional, because they refer to the likelihood of a hurricane of a certain size given that a hurricane makes landfall. The end result is the probabilistic library, which comprises a large enough number of events to represent all likely scenarios.

Basic Output of Model

We calculate expected loss costs directly for the base class risk in a geographic locale by running the event library against a base class house at the center of each ZIP code.
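The stratified sampling idea can be sketched in a few lines. This is a minimal illustration, not part of any real model: the strata names, probabilities, and central-pressure ranges below are all invented for the example.

```python
import random

# Hypothetical intensity strata for landfalling storms: weak storms are
# common, extreme storms are rare but drive the tail of the losses.
STRATA = {
    "weak":    {"prob": 0.70, "pressure_mb": (980, 1000)},
    "strong":  {"prob": 0.25, "pressure_mb": (945, 980)},
    "extreme": {"prob": 0.05, "pressure_mb": (900, 945)},
}

def stratified_sample(n_per_stratum, seed=0):
    """Draw the same number of events from every stratum, attaching a
    weight of prob / n_per_stratum to each event.  Rare strata are heavily
    oversampled, yet weighted averages still reflect true probabilities."""
    rng = random.Random(seed)
    events = []
    for name, s in STRATA.items():
        lo, hi = s["pressure_mb"]
        for _ in range(n_per_stratum):
            events.append({"stratum": name,
                           "pressure": rng.uniform(lo, hi),
                           "weight": s["prob"] / n_per_stratum})
    return events

events = stratified_sample(100)
```

With plain Monte Carlo, only about 5 of 300 draws would come from the "extreme" stratum; here it gets 100 sampled events, sharpening the estimate of unlikely events while the weights keep any weighted average unbiased.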
We divide this result by the amount of insurance to produce an expected loss cost for each ZIP. Annual expected losses for a ZIP code are obtained by multiplying the sum of the probability-weighted simulated results across all storms by an annual frequency:

EL_ZIP = F × Σ_storms (P_storm × E_ZIP × DF_storm)

That is: Expected Losses = Frequency × Sumproduct(Probability of Storm, Exposure in ZIP Code, Damage Factor for the base class).
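The expected loss calculation, the division by the amount of insurance, and the roll-up to a territory can be sketched numerically. Every figure below (frequency, storm probabilities, damage factors, vulnerability multipliers, exposures) is invented purely for illustration.

```python
# Hypothetical inputs: F is the annual hurricane landfall frequency, and
# each storm in the event library carries a conditional probability p and
# a base-class damage factor df.
F = 0.6
storms = [
    {"p": 0.70, "df": 0.001},   # frequent, mild storm
    {"p": 0.25, "df": 0.010},   # less frequent, stronger
    {"p": 0.05, "df": 0.080},   # rare, severe
]

# Coverage A exposure by ZIP and a relative vulnerability multiplier
# (e.g. coastal vs. inland) -- both assumed.
cov_a = {"33101": 50_000_000, "33102": 30_000_000}
vuln = {"33101": 1.0, "33102": 0.5}

# EL_zip = F * sum(P_storm * E_zip * DF_storm): annual expected losses.
el = {z: F * sum(s["p"] * cov_a[z] * s["df"] * vuln[z] for s in storms)
      for z in cov_a}

# ELC_zip = EL_zip / (CovA_zip / 1000): loss cost per $1000 of Coverage A.
elc = {z: el[z] / (cov_a[z] / 1000) for z in cov_a}

# Territory loss cost: exposure-weighted average of member ZIP loss costs.
elc_terr = (sum(elc[z] * cov_a[z] for z in cov_a)
            / sum(cov_a.values()))
```

Note that the territory loss cost is an exposure-weighted average, so ZIPs with more Coverage A pull the territory figure toward their own loss costs.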
Loss adjustment expenses for catastrophes are generally related to the overall level of losses, so it's appropriate to include them in the expected losses as a percentage of total losses.

Convert to a loss cost (expressed as a rate per $1000 of Coverage A):

ELC_ZIP = EL_ZIP / (CovA_ZIP / 1000)

This calculation is independent of individual company data, so it's appropriate for any insurer. Attempting to use one's own experience would yield only average loss costs by ZIP code, which fails when there's a disproportionate amount of exposure in a particular set of ZIP codes. In catastrophe ratemaking using computer modeling, large volumes of industry loss experience have been used to calibrate frequency and severity, so the value of an individual insurer's actual loss experience is limited. It may be such a small subset of total industry losses that it would produce results of very low credibility.

Our next step is to get the base class loss costs for the territory structure. Once the ZIP code groupings are selected, the loss costs for the new territories can be calculated as the exposure-weighted average of the member ZIP loss costs:

ELC_TERR = Σ_ZIP (ELC_ZIP × CovA_ZIP) / Σ_ZIP CovA_ZIP

It is likely that the most appropriate territory structure for hurricane will differ from the regular homeowners territories.

Attributes of Loss Costs via Computer Modeling

The individual ZIP codes are fully credible because the inputs have theoretically accounted for all the useful information. We don't want to assign the complement to actual results, because too much randomness could bias the answer. The model substitutes a reasonable set of possible storms for the random variation of low-frequency actual storms. This means that random statistical variation can be resolved, minimizing process risk from a ratemaking standpoint. There is still parameter risk in the selection of the key variables. Overcoming this risk is the goal of additional scientific research.
The answer to parameter risk is not to abandon modeling, but to continually look for better input parameters.

The pure premium method (used for catastrophe ratemaking) allows the calculation of loss costs in refined detail directly, using the model's frequency and severity features. Hurricane loss costs derived from modeling don't need frequent updates, for two reasons: (1) another year of actual results is unlikely to change the parameters much (though in the early years, the potential is there to update some of the damage factors); (2) once adequate rate levels are achieved, annual updates aren't needed because the exposure base (amount of Coverage A) is inflation sensitive.

While non-hurricane loss costs vary greatly by fire protection class, the hurricane peril is obviously independent of this. Policy form relativities increase as additional perils are covered. If the hurricane loss costs are a material portion of total homeowners costs, the policy form relativities would have to vary substantially by territory. The relative fire resistance of the construction is essentially irrelevant for the hurricane peril. The hurricane peril needs its own class plan, because its risk variation differs from that of the traditional covers. Hurricane relativities may not be uniformly multiplicative or additive.

To calculate indicated classification factors:
1) Run the model on a single house in each ZIP code, varying the house based on different resistance characteristics
2) Derive the relationships to the base class in ranges of relativities
3) Select average relativities that form the dominant pattern from the map illustrations

If the insurer printed all the rates by territory, instead of just the base class rates, then more flexibility could be allowed in the relativities.

Form of Rating

The hurricane rate should be split out from the previously indivisible homeowners premium, and it should also have its own class plan.
The difficulty of an overall loss experience review suggests that we unbundle the homeowners rates, use the pure premium method for hurricane ratemaking, and use the loss ratio approach for the other perils.
Since loss costs are supplied by modeling, and we have a separate rate for each catastrophe peril, the actual catastrophe losses only need to be removed from the experience period; nothing needs to be loaded back into the normal homeowners losses.

The advantages of separate catastrophe rates:
1) Simplification of the normal coverage rating and ratemaking
2) Better class and territory rating of the catastrophe coverages

However, if hurricane loss costs are left in the indivisible premium, the homeowners classes will be much more complicated to rate. Another simplification comes from the elimination of a complicated set of statewide indications including hurricane. The indications can be produced, and the actual rates selected, separately. Statutory requirements are for rates to be reasonable: not excessive, inadequate, or unfairly discriminatory. This doesn't mean that the rate filing should suppress the estimate of statewide rate changes, but when we begin to calculate the different rates via different methods, it's less obvious what the total indication is.

Expense Load Considerations

Reinsurance premium can be expressed as a function of the primary layer and added to the equation(s). Some portion of catastrophe treaty reinsurance premium should also be considered as part of the reinsurance cost. The total expected hurricane loss costs need to be adjusted to exclude the reinsured portion by having the hurricane computer model simulate the reinsurance layer:

L_XS = MIN[ MAX( Σ_ZIP (E_ZIP × DF_storm) − Retention, 0 ), Limit ]

L_XS,ZIP = L_TOTAL,ZIP × (L_XS / L_TOTAL)

EL_XS,ZIP = F × Σ_storms (P_storm × L_XS,ZIP)

The reinsurance premium can then be allocated to line of business. Premiums are then ratioed to the primary premium to get a factor to add to the indicated rate. The remaining expected loss costs outside the reinsurance layer would then be loaded for risk margin and expenses. The pass-through would already include the expenses and risk margin of the reinsurer.
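The carve-out of the reinsured layer for a single simulated storm can be sketched as follows. The retention, limit, and per-ZIP losses are assumed amounts chosen only to make the arithmetic concrete.

```python
# Hypothetical catastrophe treaty terms and per-ZIP modeled losses for
# one simulated storm (all amounts invented for illustration).
retention = 30_000_000
limit = 25_000_000
loss_by_zip = {"33101": 40_000_000, "33102": 25_000_000}

# L_XS = min(max(total storm loss - retention, 0), limit):
# the portion of this storm's loss falling inside the treaty layer.
l_total = sum(loss_by_zip.values())
l_xs = min(max(l_total - retention, 0), limit)

# L_XS,zip = L_TOTAL,zip * (L_XS / L_TOTAL): allocate the layer loss back
# to each ZIP in proportion to its share of the total storm loss.
l_xs_by_zip = {z: l * l_xs / l_total for z, l in loss_by_zip.items()}

# The expected excess losses EL_XS,zip would then weight these allocated
# amounts by storm probability and annual frequency, exactly as the
# primary expected losses are weighted.
```

Here the $65M storm pierces the $30M retention by $35M, but the treaty caps recovery at the $25M limit, which is then spread back to the ZIPs pro rata.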
Risk Load Considerations

Splitting the premium also allows us to split the calculation of a risk margin. This makes the non-catastrophe component easier to price. Once a target margin is selected for the non-catastrophe component, the margin for the catastrophe piece can be calculated as a multiple of the non-catastrophe margin, by assuming that profit should be proportional to the standard deviation of the losses. The risk load calculation should be performed on a basis net of reinsurance, because we're building the reinsurance premium back into the rates. But calculating on both a gross and a net basis allows us to evaluate our reinsurance protection by considering the total risk load required. The risk margin can be expressed as a direct function of the ratio of coefficients of variation (CVs):

Risk Margin_CAT = Risk Margin_NONCAT × (CV_CAT / CV_NONCAT)

Deriving Hurricane Base Rates

BCR_TERR = [ELC_TERR × (1 + P) + R] / (1 − C − GE − T + I)

where:
C = commission (% of premium)
GE = general expenses (% of premium)
T = taxes, licenses & fees (% of premium)
I = investment income offset (% of premium)
P = profit & contingencies (% of losses)
R = catastrophe reinsurance cost per $1000 of Cov A (if the insurer decides to pass through the cost of cat reinsurance)
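The risk margin ratio and the base class rate formula can be worked through with illustrative numbers. Every input below (CVs, margins, expense ratios, loss cost, reinsurance cost) is assumed for the example, not taken from the text.

```python
# Risk Margin_CAT = Risk Margin_NONCAT * (CV_CAT / CV_NONCAT):
# profit proportional to the variability (CV) of losses, so the highly
# volatile catastrophe piece earns a proportionally larger margin.
cv_cat, cv_noncat = 3.0, 0.5        # assumed coefficients of variation
risk_margin_noncat = 0.05           # assumed target non-cat margin
risk_margin_cat = risk_margin_noncat * (cv_cat / cv_noncat)

# BCR_TERR = (ELC_TERR * (1 + P) + R) / (1 - C - GE - T + I)
elc_terr = 3.51    # expected loss cost per $1000 of Coverage A (assumed)
P = 0.30           # profit & contingencies, applied as a % of losses
R = 0.40           # cat reinsurance cost per $1000 of Cov A (pass-through)
C, GE, T, I = 0.12, 0.08, 0.03, 0.02   # expense items, % of premium
bcr_terr = (elc_terr * (1 + P) + R) / (1 - C - GE - T + I)
```

Note the structure: loadings expressed as a percentage of losses (P) and per-unit-of-exposure costs (R) go in the numerator, while loadings expressed as a percentage of premium (C, GE, T, I) go in the denominator as a permissible loss ratio.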
Another benefit to splitting the rates is in the treatment of expenses. Since the hurricane coverage is intended to be part of the homeowners policy, fixed expenses that are part of the non-hurricane policy must not be double-counted.

Rate Filing Issues

Steps of the regulatory approval process:
1) Review the general design of the model
2) Evaluate the event simulation module
3) Test the ability of the module to simulate known past events
4) Check the distributions of key input variables
5) Perform sensitivity checks
6) Verify the damage and insurance relationship functions
7) Test output for hypothetical new events
8) Compare different modelers' results for loss costs
9) Conduct on-site due diligence and review

Appendix A: How to Construct a Model

1. Science Module
a. Incorporate the physics of the natural phenomena in a module that simulates the actual event as closely as possible
b. Must be tested before use, both to reproduce historical events and to simulate hypothetical or probabilistic results
c. Should be tested for reasonableness by predicting wind speeds for hypothetical events, to evaluate the sensitivity of the model
d. Predictive accuracy is limited by the fact that data are not captured for some factors that may affect an individual property
e. We shouldn't expect a model to exactly reproduce a historical event, but we should verify that it can adequately simulate hypothetical events with a given set of parameters
f. Actual future events don't require major modifications, but provide additional information to further refine the model

2. Engineering Module
a. Damageability functions are needed to estimate the damage to a property subject to an event of a given intensity
b. Functions should vary by line of business, region, construction, and coverage
c. Accuracy is improved by analyzing actual past events
d. On-site visits to the locations of catastrophes can provide additional insight to the modeler for identifying future classification distinctions
e. Refinement is dependent on input generally provided by the engineering community

3. Insurance Module
a. The science and engineering modules must be integrated with the insurance module to determine the resulting insured loss from a given event
b. Must develop & maintain a database of in-force exposures that captures the relevant factors used in assessing the damage to a given risk

4. Validation
a. The modeler must verify how the modules interact by completing an overall analysis of the results
b. Actual incurred loss experience is the obvious candidate for testing modeled losses
c. One issue raised is demand surge
i. It should NOT be incorporated in the damage functions
ii. It would inappropriately increase the expected level of future losses

Appendix B: How Other Perils are Modeled

1. Earthquake
a. Precision will not reach the level of hurricane models
b. The insurance module is similar to that of a hurricane model
c. The use of percentage deductibles and separate coverage deductibles presents a new challenge
d. Models must have the capacity to handle various deductible combinations
e. Insured loss data available for validation are more limited than for hurricanes
f. The two different types of earthquake differ by nature, and the event generator must vary to reflect the different types of shaking intensities
g. Serious damage can be caused by earthquakes not located on known fault systems
i. Frequency is unknown
ii. Inclusion of this type of event could drastically increase modeled loss costs
2. Tornado & Hail
a. Actual loss experience is more readily available than for any other type of natural catastrophe
b. The traditional way of developing loss costs has been to smooth the actual experience over a number of years
c. This doesn't capture the essence of why we catastrophe model: to estimate the loss potential of a company GIVEN ITS CURRENT EXPOSURE DISTRIBUTION
d. These events are more sudden and unpredictable than hurricanes; most historical information comes from observation
e. Damage relationships at a given wind speed for a tornado are much different from those of a hurricane
f. Development of a hail model resembles that of a tornado model
g. Validation of these models depends on the availability of loss data and on how much differentiation between the two perils is possible

3. Winter Storm
a. Some of the same characteristics as hurricanes prompt the use of a catastrophe model to simulate winter storms
b. Winter storms don't have a specific unit of measure that describes the intensity of a given event
c. Damage functions associated with winter storms are very different from those of other perils, because little of the damage is structural
d. Creation of a probabilistic database requires simulation of multiple events
e. Motivation to develop computer models has not been as high, other than for risk analysis and the development of PMLs
f. Computer modeling does yield better expected loss estimates, and allows the exclusion of past catastrophes from the normal homeowners ratemaking database for better stability in rate level indications