Effects of Simulation Volume on Risk Metrics for Dynamo DFA Model

Abstract
Of necessity, users of complex simulation models are faced with the question of “how many simulations should be run?” On one hand, the pragmatic desire to shorten computer run times by running fewer simulation trials can preclude simulating enough trials to achieve adequate precision. On the other hand, simulating many hundreds of thousands or millions of trials can result in unacceptably long run times and/or require undesirable computer hardware expenditures to bring run times down to acceptable levels. Financial projection models for insurers, such as Dynamo, often have complex cellular logic and many random variables. Users of insurance company financial models often complicate matters further by considering correlations between different subsets of the model’s random variables. Unfortunately, the runtime/accuracy tradeoff becomes even more pronounced when correlations between variables are considered.

Dynamo version 5, the high performance computing (HPC) version used for this paper, has in excess of 760 random variables, many of which are correlated. We have used this model to produce probability distributions and risk metrics such as Value at Risk (VaR), Tail Value at Risk (TVaR), and Expected Policyholder Deficit (EPD) for a variety of modeled variables. In order to construct many of the variables of interest, models such as Dynamo have cash flow overlays that enable the projection of financial statement accounting structures for the insurance entity being modeled. The logic of these models is enormously complex, and even a single simulation is time consuming.
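For readers less familiar with these risk metrics, the following sketch shows how VaR, TVaR, and EPD might be computed from a vector of simulated outcomes. It is illustrative only; the confidence level, asset amount, and lognormal sample are assumptions for demonstration and are not taken from Dynamo.

```python
"""Illustrative sketch (not the paper's code): VaR, TVaR, and EPD
from a simulated loss sample. All numeric inputs are assumed."""
import numpy as np

def var_tvar_epd(losses, alpha=0.99, assets=5.0e7):
    """Return (VaR, TVaR, EPD) for a simulated loss sample.

    VaR  : the alpha-quantile of the loss distribution.
    TVaR : the mean loss conditional on exceeding VaR.
    EPD  : expected shortfall of losses over available assets,
           E[max(loss - assets, 0)].
    """
    losses = np.asarray(losses)
    var = np.quantile(losses, alpha)
    tail = losses[losses > var]
    tvar = tail.mean() if tail.size else var
    epd = np.maximum(losses - assets, 0.0).mean()
    return var, tvar, epd

if __name__ == "__main__":
    rng = np.random.default_rng(seed=1)
    # Hypothetical aggregate-loss sample; the actual model produces its
    # outcomes from the full financial-statement projection logic.
    sample = rng.lognormal(mean=17.0, sigma=0.6, size=100_000)
    print(var_tvar_epd(sample))
```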

This paper begins by examining the effect that varying the number of simulations has on aggregate distributions of a series of seven right-tailed, correlated lognormal distributions. Not surprisingly, the values were found to be more dispersed for smaller sample sizes. What was surprising was that the values were also lower when smaller sample sizes were used. Based on the simulations we performed, we conclude that a minimum of 100,000 trials is needed to produce stable aggregate results with sufficient observations in the extreme tails of the underlying distributions.
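A minimal sketch of this kind of experiment appears below, under assumed parameters: seven lognormal marginals correlated through a common normal structure, aggregated by summation, with tail metrics compared across simulation volumes. The log-means, log-standard deviations, pairwise correlation, and sample sizes are illustrative assumptions, not the values used in the paper.

```python
"""Sketch of aggregating seven correlated lognormals and comparing
tail metrics across simulation volumes. Parameters are assumed."""
import numpy as np

MU = np.full(7, 15.0)      # assumed lognormal log-means
SIGMA = np.full(7, 0.5)    # assumed lognormal log-standard deviations
RHO = 0.25                 # assumed common pairwise correlation

def simulate_aggregate(n_trials, rng):
    """Draw n_trials aggregate values from seven correlated lognormals."""
    corr = np.full((7, 7), RHO)
    np.fill_diagonal(corr, 1.0)
    chol = np.linalg.cholesky(corr)
    z = rng.standard_normal((n_trials, 7)) @ chol.T   # correlated normals
    x = np.exp(MU + SIGMA * z)                        # lognormal marginals
    return x.sum(axis=1)                              # aggregate by sum

def tail_metrics(agg, alpha=0.99):
    var = np.quantile(agg, alpha)
    tvar = agg[agg > var].mean()
    return var, tvar

if __name__ == "__main__":
    rng = np.random.default_rng(seed=2012)
    for n in (1_000, 10_000, 100_000, 1_000_000):
        var, tvar = tail_metrics(simulate_aggregate(n, rng))
        print(f"n={n:>9,}  VaR99={var:,.0f}  TVaR99={tvar:,.0f}")
```

Running a loop of this form repeatedly at each sample size gives a sense of both the dispersion and the level of the estimated tail metrics as the simulation volume grows.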

Similar conclusions are drawn for the modeled variables simulated with Dynamo 5. Sample sizes under 100,000 produce potentially misleading results for risk metrics associated with projected policyholders surplus. Based on the quantitative values produced by the HPC version of Dynamo 5 used in this article, we conclude that sample sizes in excess of 500,000 are warranted. More simulations are needed for Dynamo 5 than for the seven-variable example because of Dynamo's greater complexity, specifically its much larger number of random variables and the complexity of the correlated interactions between them. As support for this, we observe that simulated metrics for Policyholders Surplus decreased by 2% to 3% when the number of simulations was increased from 100,000 to 700,000, and by 3% to 6% when it was increased from 10,000 to 700,000.

Keywords: Dynamic Risk Modeling, Solvency Analysis, Portfolio Aggregation, Loss Distributions

Volume: Summer, Vol. 2
Pages: 1-22
Year: 2012
Categories:
  Business Areas: Reinsurance - Aggregate Excess/Stop Loss
  Actuarial Applications and Methodologies: Dynamic Risk Modeling; Solvency Analysis
  Financial and Statistical Methods: Loss Distributions
Publication: Casualty Actuarial Society E-Forum
Authors: William C. Scheel, Gerald S. Kirschner