Modeling Today's and Tomorrow's Risk: One Insurance Company's Strategy

In a recent white paper from Government Entities Mutual (GEM), Inc., which writes Liability, Workers' Compensation, and Property reinsurance coverage, underwriting manager Joel Kress posed the question, "How risky are we?"
To answer this question, Kress and his team decided to model the two most detrimental and most quantifiable risks: Underwriting Risk and Reserve Development Risk. For Underwriting Risk, they sought to quantify their annual risk transfer contracts. For Reserve Development Risk, they outlined and measured the risk associated with all past contracts they had written. Since GEM is almost 10 years old, they knew there would be years (and even decades) of further Incurred But Not Reported (IBNR) development on GEM's balance sheet. This type of risk accumulates geometrically as the years go on.
Because GEM's own loss experience was limited and thus not statistically credible, Kress and his team supplemented it with loss experience from industry reinsurance data. From this combination, they were able to create a single loss distribution that statistically estimates the company's expected losses and their variability.
Using @RISK's Monte Carlo simulation, GEM then created a profile for each contract written in the most recent policy year (2011), and distilled the information from each contract into the exposure to loss that GEM held as the risk-bearing captive, which is simply frequency × severity. Kress and GEM's actuaries then estimated the risk for the historical policy periods by using the selected loss distribution to measure the variability around the expected loss reserves. This variability, or, of greater concern, the chance that losses would cost more than expected, was the third piece of GEM's risk metric. GEM's selected loss distribution looked like many other (re)insurance loss distributions: skewed to the right, indicating a chance, albeit slim, of a large, calamitous loss.
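The frequency × severity distillation can be sketched in a few lines. The contract names and dollar figures below are hypothetical illustrations, not GEM's actual book:

```python
# Hypothetical contract profiles, each distilled to an expected annual
# claim frequency and an expected claim severity. All names and numbers
# are illustrative assumptions, not GEM's actual contracts.
contracts = [
    {"name": "Liability A",   "frequency": 2.5, "severity": 120_000},
    {"name": "Workers Comp B", "frequency": 4.0, "severity": 65_000},
    {"name": "Property C",    "frequency": 0.8, "severity": 450_000},
]

# Exposure to loss for each contract is simply frequency x severity.
for c in contracts:
    c["expected_loss"] = c["frequency"] * c["severity"]

# Summing across contracts gives the portfolio's expected loss.
portfolio_expected = sum(c["expected_loss"] for c in contracts)
print(f"Portfolio expected loss: ${portfolio_expected:,.0f}")
```

The expected loss is only the center of the distribution; the simulation described next supplies the variability around it.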
The majority of this risk came from contracts currently being written, since the insurable events have not yet occurred. Turning to @RISK again, Kress and his team used the input variables to estimate GEM's losses for the current policy year's contracts, then ran the simulation for 10,000 hypothetical policy years. From this wealth of data, they were able to determine key statistical metrics.
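A minimal sketch of such a 10,000-year Monte Carlo run, assuming a Poisson claim frequency and a lognormal claim severity (common textbook choices; GEM's actual fitted distributions and parameters are not given in the source):

```python
import numpy as np

rng = np.random.default_rng(42)

N_YEARS = 10_000               # hypothetical policy years to simulate
FREQ_MEAN = 3.0                # assumed Poisson claim count per year
SEV_MU, SEV_SIGMA = 11.0, 1.5  # assumed lognormal severity parameters

# Draw a claim count for each year, then a severity for every claim,
# and sum severities back into annual aggregate losses.
counts = rng.poisson(FREQ_MEAN, size=N_YEARS)
severities = rng.lognormal(SEV_MU, SEV_SIGMA, size=counts.sum())
year_index = np.repeat(np.arange(N_YEARS), counts)
annual_losses = np.bincount(year_index, weights=severities, minlength=N_YEARS)

# Key statistical metrics. The mean sitting above the median reflects
# the right skew: a slim chance of a large, calamitous loss year.
print(f"Mean annual loss:   {annual_losses.mean():,.0f}")
print(f"Median annual loss: {np.median(annual_losses):,.0f}")
print(f"99th percentile:    {np.percentile(annual_losses, 99):,.0f}")
```

The simulated array of annual losses is the raw material for every downstream metric, from percentile shortfalls to benchmark-breach probabilities.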
Once all the simulations were finished, it was time to measure the results. GEM used Surplus as the measuring stick, since it is easily understood, readily calculable, and of concern to most interested parties. GEM found that at a 60% Confidence Level, its Surplus would need to make up a $965,000 shortfall in losses. In other words, for the risks modeled, GEM's current and historical contracts would cost $965,000 more than expected, and Surplus would have to absorb the difference.
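One plausible reading of that metric is the gap between the 60th-percentile simulated loss and the expected loss. A sketch, with an illustrative lognormal loss sample standing in for the real simulation output:

```python
import numpy as np

rng = np.random.default_rng(7)
# Illustrative stand-in for 10,000 simulated aggregate annual losses;
# the distribution and parameters are assumptions, not GEM's output.
losses = rng.lognormal(mean=13.5, sigma=0.3, size=10_000)

expected = losses.mean()
# At a 60% confidence level, losses come in at or below this value in
# 60% of simulated years.
var_60 = np.percentile(losses, 60)
# The shortfall Surplus must cover is the excess over the expected loss.
shortfall = var_60 - expected
print(f"Shortfall at 60% confidence: ${shortfall:,.0f}")
```

For a right-skewed distribution the chosen confidence level matters: low percentiles can sit below the mean, so the shortfall only appears once the confidence level climbs past the mean's own quantile.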
The last step in this process was to measure the company against five statistical benchmarks of ruin: the Captive's total Contributed Capital, the Company Action Level, the Regulatory Action Level, the Authorized Control Level, and the Mandatory Control Level. GEM was able to assign a probability to breaching each of these benchmarks, ranging from 17.2% down to 0.4%.
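Assigning a chance to each benchmark amounts to counting the fraction of simulated years whose shortfall exceeds that threshold. A sketch, with hypothetical dollar levels for the five benchmarks (the real levels are not given in the source):

```python
import numpy as np

rng = np.random.default_rng(11)
# Illustrative simulated surplus shortfalls for 10,000 policy years.
shortfalls = rng.lognormal(mean=13.0, sigma=0.9, size=10_000)

# Hypothetical benchmark levels, ordered from first to last line of
# defense. Dollar figures are assumptions, not GEM's actual capital.
benchmarks = {
    "Contributed Capital":        500_000,
    "Company Action Level":     1_000_000,
    "Regulatory Action Level":  2_000_000,
    "Authorized Control Level": 4_000_000,
    "Mandatory Control Level":  8_000_000,
}

# The chance assigned to each benchmark is the fraction of simulated
# years whose shortfall exceeds that threshold.
probs = {name: float((shortfalls > level).mean())
         for name, level in benchmarks.items()}
for name, p in probs.items():
    print(f"{name}: {p:.1%}")
```

Because the thresholds rise, the breach probabilities fall monotonically, which is why GEM's figures span a range from a modest chance down to a fraction of a percent.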
Thus, using @RISK, Joel Kress and GEM were able to assess their risk for current and future books of business. According to Kress, "None of this minutia would be possible without the power of computers. It is one thing to program an algorithm to do a set of tasks, as outlined above. It is another thing entirely to make the computer work for you."
