Browse Research
2015
In this paper, we present a stochastic loss development approach that models all the core components of the claims process separately. The benefits of doing so are discussed, including more accurate results from the increased data available for analysis. The approach also allows for finer segmentations, which is very helpful for pricing and profitability analysis.
2015
Current approaches to measuring uncertainty in an unpaid claim estimate often focus on parameter risk and process risk but do not account for model risk. This paper introduces simulation-based approaches to incorporating model error into an actuary’s estimate of uncertainty. The first approach, called Weighted Sampling, aims to incorporate model error into the uncertainty of a single prediction.
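The Weighted Sampling idea described in the abstract can be illustrated with a minimal sketch. The candidate models, their weights, and the lognormal predictive distributions below are hypothetical, chosen only to show the mechanics of drawing each simulation from a model selected in proportion to its weight:

```python
import random

# Hypothetical candidate models for an unpaid claim estimate, each a
# sampler for its predictive distribution (process + parameter risk),
# paired with a weight reflecting the model's plausibility.
models = [
    (0.5, lambda: random.lognormvariate(10.0, 0.3)),  # model A
    (0.3, lambda: random.lognormvariate(10.1, 0.4)),  # model B
    (0.2, lambda: random.lognormvariate(9.9, 0.5)),   # model C
]

def weighted_sample(n_sims=10_000, seed=42):
    """Draw each simulation from a model chosen with probability equal
    to its weight, so the pooled distribution reflects model risk on
    top of process and parameter risk."""
    random.seed(seed)
    weights = [w for w, _ in models]
    draws = []
    for _ in range(n_sims):
        (_, sampler), = random.choices(models, weights=weights)
        draws.append(sampler())
    return draws

sims = weighted_sample()
```

The spread of `sims` is wider than any single model's predictive distribution, which is exactly the effect of admitting model error.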
2015
PEBELS is a method for estimating the expected loss cost for each loss layer of an individual property risk regardless of size. By providing maximum resolution in estimating layer loss costs, PEBELS facilitates increased accuracy and sophistication in many actuarial pricing applications such as ratemaking, predictive modeling, catastrophe modeling, and reinsurance pricing.
2015
When building statistical models to help estimate future results, actuaries need to be aware that not only is there uncertainty inherent in random events (process risk), there is also uncertainty inherent in using a finite sample to parameterize the models (parameter risk).
2015
A representative data set is used to provide an example comparing classical and Bayesian approaches to making inferences about the point in a sequence of random variables at which the underlying distribution may shift. Inferences about the underlying distributions themselves are also made. Most of the underlying ‘R’ code used in the analysis is shown in the appendix.
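A stripped-down version of the Bayesian changepoint inference the abstract describes can be sketched as follows (the paper's own R code is in its appendix; this Python sketch assumes known Poisson rates before and after the shift and a uniform prior on the changepoint, which is a simplification):

```python
import math

def changepoint_posterior(xs, lam1, lam2):
    """Posterior over the changepoint index tau under a uniform prior,
    assuming known Poisson rates lam1 before and lam2 after the shift:
    p(tau | x) ∝ prod_{t<=tau} Pois(x_t; lam1) * prod_{t>tau} Pois(x_t; lam2)."""
    def log_pois(x, lam):
        return x * math.log(lam) - lam - math.lgamma(x + 1)
    n = len(xs)
    log_post = []
    for tau in range(1, n):  # shift occurs after observation tau
        ll = sum(log_pois(x, lam1) for x in xs[:tau])
        ll += sum(log_pois(x, lam2) for x in xs[tau:])
        log_post.append(ll)
    m = max(log_post)  # normalize in log space for stability
    w = [math.exp(v - m) for v in log_post]
    s = sum(w)
    return [x / s for x in w]

data = [2, 3, 2, 1, 3, 8, 9, 7, 10, 8]  # rate appears to shift after t = 5
post = changepoint_posterior(data, lam1=2.0, lam2=8.0)
```

The posterior concentrates on the index where the fitted low-rate and high-rate segments best match the data.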
2015
The paper offers a simple framework for ranking the common reinsurance structures in practice with the theory of stochastic orders. The basic idea is to slice the space of reinsurance structures into groups by expected loss cost to facilitate the comparisons within the group and between groups.
2015
The emergence of Bayesian Markov Chain Monte-Carlo (MCMC) models has provided actuaries with unprecedented flexibility in stochastic model development. Another recent development has been the posting of a database on the CAS website that consists of hundreds of loss development triangles with outcomes.
2015
My congratulations to Mr. Leigh J. Halliwell on this paper that clearly presents the mathematics of excess losses with an interesting example. I agree with him that the mathematics of excess losses is beautiful and powerful. However, the mathematics of excess losses also contains several subtle points that are not mentioned in the paper. This discussion note complements the article by clarifying some of these points.
2015
In this paper we rigorously investigate the common shock, or contagion, model, for correlating insurance losses. In addition, we develop additional theory which describes how the common shock model can be incorporated within a larger set of distributions. We also address the issue of calibrating contagion models to empirical data. To this end, we propose several procedures for calibrating contagion models using real-world industry data.
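A common version of the contagion model the abstract investigates conditions claim counts on a shared Gamma shock: draw a multiplier theta with mean 1, then draw each line's count as Poisson with its mean scaled by theta. The sketch below is a generic illustration of that mechanism, not the paper's calibration; the line means and contagion value are hypothetical:

```python
import math
import random

def poisson(mean):
    """Knuth's Poisson sampler (adequate for small means)."""
    l, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= l:
            return k
        k += 1

def simulate_common_shock(lams, contagion, n_sims=20_000, seed=1):
    """Simulate claim counts for several lines under a common Gamma
    shock: theta ~ Gamma with mean 1 and variance = contagion, then
    N_i | theta ~ Poisson(theta * lam_i). The shared theta induces
    positive correlation between the lines' counts."""
    random.seed(seed)
    alpha = 1.0 / contagion  # shape; scale = contagion gives mean 1
    sims = []
    for _ in range(n_sims):
        theta = random.gammavariate(alpha, contagion)
        sims.append([poisson(theta * lam) for lam in lams])
    return sims

counts = simulate_common_shock(lams=[5.0, 10.0], contagion=0.2)
```

Because both lines see the same theta in each scenario, their counts co-move, which is the correlating effect the common shock model is designed to produce.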
2015
NCCI recently completed an extended review of its Experience Rating (ER) Plan. Although no major changes had been made for many years, testing indicated that ER Plan performance was still generally good. The primary cause of deteriorating performance was the use of a fixed split point between primary and excess losses while average claim severity increased dramatically.
2015
Consideration of parameter risk is particularly important for actuarial models of uncertainty. That is because—unlike process risk—parameter risk does not diversify when modeling a large volume of independent exposures. Without consideration of parameter risk, decision makers may be tempted to underwrite higher volumes as a result of the apparent high degree of predictability in the mean outcome.
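The non-diversification point can be demonstrated with a short simulation. The setup below is hypothetical (a shared, uncertain mean across otherwise independent normal exposures): process risk averages away as the number of exposures grows, while the parameter-risk contribution to the coefficient of variation does not:

```python
import math
import random

def cv_of_portfolio_mean(n_exposures, param_sd, n_sims=5_000, seed=7):
    """Coefficient of variation of the average outcome across
    n independent exposures when the (shared) true mean is itself
    uncertain. Hypothetical setup: true mean mu ~ Normal(100, param_sd),
    each exposure outcome ~ Normal(mu, 30)."""
    random.seed(seed)
    outcomes = []
    for _ in range(n_sims):
        mu = random.gauss(100.0, param_sd)  # one draw of the unknown parameter
        avg = sum(random.gauss(mu, 30.0)
                  for _ in range(n_exposures)) / n_exposures
        outcomes.append(avg)
    m = sum(outcomes) / n_sims
    var = sum((x - m) ** 2 for x in outcomes) / n_sims
    return math.sqrt(var) / m

# With param_sd = 0 the CV falls roughly as 1/sqrt(n); with
# param_sd > 0 it levels off near param_sd / 100 no matter how
# large the portfolio gets.
```

Comparing `cv_of_portfolio_mean(100, 0.0)` with `cv_of_portfolio_mean(100, 10.0)` shows the floor that parameter risk puts under the portfolio's relative uncertainty.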
2015
Interaction with actuarial models by both actuaries and non-actuaries is inevitable and requires careful study so that these models may better serve their purpose. Yet empirical and scientific investigations into how experts make their judgments are rarely reported in actuarial science literature.
2015
This paper presents and compares different risk classification models for the frequency and severity of claims, employing regression models for location, scale, and shape. The differences between these models are analyzed through the mean and variance of the annual number of claims and the claim costs of insureds belonging to different risk classes, and interesting results about claiming behavior are obtained.
2015
Actuaries have used the so-called “square root rule” for credibility for many years, even though the full-credibility standard (the “F” value) can take any value, and the rule’s assumption that the data receiving the complement of credibility is stable is often violated. Best estimate credibility requires fewer or no assumptions, but often requires certain key constants.
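For reference, the square root rule assigns credibility Z = min(1, sqrt(n / F)), where F is the full-credibility claim count standard whose choice, as the abstract notes, is essentially judgmental. A minimal sketch (1,082 is a common textbook value of F for claim frequency, from a 90% probability of being within 5% of the mean):

```python
import math

def square_root_credibility(n_claims, full_cred_standard=1082):
    """Classical 'square root rule': Z = min(1, sqrt(n / F)).
    The full-credibility standard F can in principle take any value,
    which is one of the criticisms the paper raises."""
    return min(1.0, math.sqrt(n_claims / full_cred_standard))

def blended_estimate(observed, complement, n_claims, full_cred_standard=1082):
    """Credibility-weighted blend of the observed experience with the
    complement of credibility."""
    z = square_root_credibility(n_claims, full_cred_standard)
    return z * observed + (1 - z) * complement
```

Note that the blend implicitly treats the complement as stable and fully credible, the very assumption the paper questions.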
2015
A continuous version of Sherman’s discrete inverse power curve model for loss development is defined. This continuous version, apparently unlike its discrete counterpart, has simple formulas for cumulative development factors, including tail factors. The continuous version has the same tail convergence conditions and basic analytical properties as the discrete version.
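The discrete model and its tail-convergence condition can be sketched numerically. In Sherman's inverse power curve, the age-to-age factor at age t is 1 + a·t^(−b), and the infinite product defining the tail factor converges only when b > 1, the same condition the abstract says the continuous version shares. The parameter values below are hypothetical:

```python
import math

def inverse_power_ldf(t, a, b):
    """Sherman's discrete inverse power curve: the age-to-age loss
    development factor at age t is 1 + a * t**(-b)."""
    return 1.0 + a * t ** (-b)

def tail_factor(from_age, a, b, horizon=100_000):
    """Approximate the tail factor as the product of age-to-age factors
    from `from_age` out to a large horizon, accumulated in log space.
    The infinite product converges only when b > 1."""
    log_tail = sum(math.log(inverse_power_ldf(t, a, b))
                   for t in range(from_age, horizon))
    return math.exp(log_tail)
```

With b well above 1 the truncated product stabilizes quickly; as b approaches 1 from above, ever-larger horizons are needed, which is why closed-form cumulative factors in the continuous version are attractive.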
2015
This paper discusses a methodology of calculating actuarial housing values, with the goal of helping mortgage lenders to gauge departures of housing market values from the fundamentals, and assisting policymakers with tools for implementing counter-cyclical policies.
2015
In this paper we explore a method to model the financial risks of holding portfolios of long-term temperature derivatives for any subset of the 30 North American cities whose derivatives are actively traded on the Chicago Mercantile Exchange (CME).