Browse Research

2016
Motivation: The Hayne MLE family of models is quite elegant in its application, but, as with most models, the modeling framework needs the flexibility to deal with many different practical issues if it is to address the needs of the practicing actuary.
2016
Motivation: The development of a wide variety of reserve variability models has been primarily driven by the need to quantify reserve uncertainty. This quantification can serve as the basis for satisfying a number of Solvency II requirements in Europe, can be used to enhance Own Risk Solvency Assessment (ORSA) reports, and is often used as an input to DFA or Dynamic Risk Models, to name but a few.
2016
Motivation: Supervised Learning (building predictive models from past examples) is an important part of Machine Learning and contains a vast and ever-increasing array of techniques that actuaries can use alongside more traditional methods. Underlying many Supervised Learning techniques are a small number of important concepts that are also relevant to many areas of actuarial practice.
2016
Insurance claims fraud is one of the major concerns in the insurance industry. According to many estimates, excess payments due to fraudulent claims account for a large percentage of total payments across all classes of insurance.
2016
Insurance policies often contain optional insurance coverages known as endorsements. Because these additional coverages are typically inexpensive relative to primary coverages and data can be sparse (coverages are optional), rating of endorsements is often done in an ad hoc manner after a primary analysis has been conducted.
2016
In this paper, we present a stochastic loss development approach that models all the core components of the claims process separately. The benefits of doing so are discussed, including more accurate results, since more data become available for analysis. The approach also allows for finer segmentation, which is helpful for pricing and profitability analysis.
2016
Generalized linear models have been in use for over thirty years, and there is no shortage of textbooks and scholarly articles on their underlying theory and application in solving any number of useful problems. Actuaries have for many years used GLMs to classify risks, but it is only relatively recently that levels of interest and rates of adoption have increased to the point where it now seems as though they are near-ubiquitous.
2016
The purpose of this monograph is to provide access to generalized linear models for loss reserving, initially with a strong emphasis on the chain ladder. The chain ladder is formulated in a GLM context, as is the statistical distribution of the loss reserve. This structure is then used to test the need for departure from the chain ladder model and to formulate any required model extensions.
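As background to the GLM formulation mentioned here, a minimal sketch follows of the well-known result that chain ladder forecasts can be reproduced by an over-dispersed Poisson GLM on incremental losses with accident-year and development-year factors; the triangle data and variable names are illustrative, not taken from the monograph.

```python
# A minimal sketch, assuming the over-dispersed Poisson (ODP) formulation
# of the chain ladder: incremental losses modeled with a log link and
# categorical accident-year (ay) and development-year (dev) effects.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Incremental paid losses from a made-up 3x3 triangle.
data = pd.DataFrame({
    "ay":  [0, 0, 0, 1, 1, 2],
    "dev": [0, 1, 2, 0, 1, 0],
    "inc": [100.0, 60.0, 20.0, 110.0, 70.0, 120.0],
})

# Poisson GLM with Pearson-estimated scale (quasi-/over-dispersed Poisson).
fit = smf.glm("inc ~ C(ay) + C(dev)", data=data,
              family=sm.families.Poisson()).fit(scale="X2")

# Predict the unobserved cells of the triangle; their sum is the reserve.
future = pd.DataFrame({"ay": [1, 2, 2], "dev": [2, 1, 2]})
reserve = fit.predict(future).sum()
print(f"Chain-ladder (ODP GLM) reserve estimate: {reserve:.1f}")
```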
2016
This paper provides an accessible account of potential pitfalls in the use of predictive models in property and casualty insurance. With a series of entertaining vignettes, it illustrates what can go wrong. The paper should leave the reader with a better appreciation of when predictive modeling is the tool of choice and when it needs to be used with caution. Keywords: Predictive modeling, GLM
2016
Given a Bayesian Markov Chain Monte Carlo (MCMC) stochastic loss reserve model for two separate lines of insurance, this paper describes how to fit a bivariate stochastic model that captures the dependencies between the two lines of insurance. A Bayesian MCMC model similar to the Changing Settlement Rate (CSR) model, as described in Meyers (2015), is initially fit to each line of insurance.
2016
The global insurance protection gap is one of the most pressing issues facing our society. It leads to a severe lack of societal resilience in many developing and emerging countries, where insurance today hardly plays any role when it comes to mitigating the impacts of natural disasters or pandemics, to name just two of the major societal risks. More often than not, economic losses remain essentially uninsured.
2016
In spite of its increasing relevance for businesses today, research on cyber risk is limited. Many papers have been devoted to the technological aspects, but relatively little research has been published in the business and economics literature. The existing articles emphasize the lack of data and the modelling challenges (e.g., Maillart and Sornette 2010; Biener, Eling, and Wirfs 2015), the complexity and dependent risk structure (e.g.
2016
It has been recently shown that numerical semiparametric bounds on the expected payoff of financial or actuarial instruments can be computed using semidefinite programming. However, this approach has practical limitations. Here we use column generation, a classical optimization technique, to address these limitations.
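To illustrate the kind of computation described here, the following is a minimal column-generation sketch for an upper semiparametric bound on an expected payoff subject to moment constraints; the payoff, support interval, moment values, and tolerances are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch, assuming X is supported on [0, b] and only the first
# two moments are known. The master problem is a small LP over a finite
# set of support points; pricing adds the point with the most negative
# reduced cost until no improving column remains.
import numpy as np
from scipy.optimize import linprog

b = 10.0
moments = np.array([1.0, 2.0, 6.0])        # E[X^0], E[X^1], E[X^2] (assumed)
f = lambda x: np.maximum(x - 3.0, 0.0)     # illustrative stop-loss payoff

support = list(np.linspace(0.0, b, 5))     # initial columns
grid = np.linspace(0.0, b, 2001)           # pricing grid

for _ in range(50):
    xs = np.array(support)
    A = np.vstack([xs**k for k in range(len(moments))])   # moment columns
    c = -f(xs)                                            # maximize -> minimize -f
    res = linprog(c, A_eq=A, b_eq=moments, bounds=(0, None), method="highs")
    y = res.eqlin.marginals                               # duals of moment rows
    # Reduced cost of a candidate point x: -f(x) - sum_k y_k x^k.
    red = -f(grid) - sum(y[k] * grid**k for k in range(len(moments)))
    j = int(np.argmin(red))
    if red[j] > -1e-9:                                    # no improving column
        break
    support.append(float(grid[j]))

print(f"Upper bound on E[f(X)]: {-res.fun:.4f}")
```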
2016
The purpose of the present paper has been to test whether loss reserving models that rely on claim count data can produce better forecasts than the chain ladder model (which does not rely on counts), better in the sense of being subject to a smaller prediction error. The question at issue has been tested empirically by reference to the Meyers-Shi data set. Conclusions are drawn on the basis of the emerging numerical evidence.
2016
Actuaries have always had the impression that the chain-ladder reserving method applied to real data has some kind of “upward” bias. This bias will be explained by the newly reported claims (true IBNR) and taken into account with an additive part in the age-to-age development. The multiplicative part in the development is understood to be restricted to the changes in the already reported claims (IBNER, “incurred but not enough reserved”).
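A minimal numeric sketch of the additive-plus-multiplicative development described here follows; the starting value, IBNER factors, and IBNR add-ons are made up for illustration.

```python
# A minimal sketch, assuming development of the form
#   reported_{d+1} = f * reported_d + a,
# where f is a multiplicative IBNER factor acting on already-reported
# claims and a is an additive term for newly reported claims (true IBNR).
def develop(reported, f_ibner, a_ibnr):
    """One development step: multiplicative IBNER plus additive IBNR."""
    return f_ibner * reported + a_ibnr

reported = 1000.0                        # claims reported to date (illustrative)
factors = [(1.05, 80.0), (1.02, 30.0), (1.00, 10.0)]  # (f, a) per age (assumed)

for f, a in factors:
    reported = develop(reported, f, a)
print(f"Projected ultimate: {reported:.1f}")
# A purely multiplicative chain ladder would absorb the additive IBNR
# into its age-to-age factors, which is the source of the "upward" bias
# the abstract refers to.
```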
2016
Nonparametric regression predicts one variable, called the response, given another variable, called the predictor, without any assumption about the relationship between these two random variables. The data traditionally used in nonparametric regression are a sample from the two variables; that is, a matrix with two complete columns.
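For concreteness, here is a minimal sketch of nonparametric regression in the sense described above, using a Nadaraya-Watson kernel estimator on a complete two-column sample; the data, kernel, and bandwidth are illustrative choices, not the paper's.

```python
# A minimal sketch, assuming a Gaussian kernel and a fixed bandwidth h:
# estimate E[Y | X = x0] as a kernel-weighted average of nearby responses.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)                            # predictor sample
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 200)  # noisy response

def nw_estimate(x0, x, y, h=0.05):
    """Nadaraya-Watson estimate of the regression function at x0."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)            # Gaussian weights
    return np.sum(w * y) / np.sum(w)

print(f"E[Y | X=0.25] ~= {nw_estimate(0.25, x, y):.3f}")  # truth: sin(pi/2) = 1
```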
2016
Moment-based approximations have been extensively analyzed over the years (see, e.g., Osogami and Harchol-Balter 2006 and references therein). A number of specific phase-type (and non-phase-type) distributions have been considered to tackle the moment-matching problem (see, for instance, Johnson and Taaffe 1989).
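As a minimal illustration of two-moment matching to simple phase-type forms, the sketch below uses the standard recipes: an Erlang when the squared coefficient of variation (SCV) is below 1 and a balanced-means two-phase hyperexponential when it is above 1. This shows the generic technique only, not the specific constructions studied in the references above.

```python
# A minimal sketch, assuming only the mean and SCV (= variance / mean^2)
# are to be matched. Erlang-k has SCV = 1/k, so the match is approximate
# unless 1/SCV is an integer; the balanced-means hyperexponential match
# is exact for SCV >= 1.
import math

def match_two_moments(mean, scv):
    """Return a small phase-type description matching (mean, SCV)."""
    if scv < 1.0:
        k = max(1, round(1.0 / scv))                  # Erlang phase count
        return ("erlang", {"phases": k, "rate": k / mean})
    # Balanced-means two-phase hyperexponential (p1/mu1 = p2/mu2).
    p1 = 0.5 * (1.0 + math.sqrt((scv - 1.0) / (scv + 1.0)))
    return ("hyperexp", {"p": (p1, 1.0 - p1),
                         "rates": (2.0 * p1 / mean, 2.0 * (1.0 - p1) / mean)})

print(match_two_moments(mean=1.0, scv=4.0))
```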
2016
This paper introduces sequential statistical methods in actuarial science. As an example of sequential decision making that is based on the data accrued in real time, it focuses on sequential testing for full credibility. Classical statistical tools are used to determine the stopping time and the terminal decision that controls the overall error rate and power of the procedure.
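To make the stopping-time idea concrete, here is a minimal sketch that uses the classical limited-fluctuation full-credibility standard as a sequential stopping rule; the tolerance, confidence level, and monthly counts are illustrative, and the paper's sequential procedure and its error-rate control are more involved than this.

```python
# A minimal sketch, assuming Poisson claim frequency and the classical
# limited-fluctuation standard: full credibility once the expected claim
# count reaches (z / r)^2, where r is the tolerance and z the two-sided
# normal quantile for the chosen probability p.
from scipy.stats import norm

def full_credibility_standard(p=0.90, r=0.05):
    """Claim count required for full credibility (Poisson frequency)."""
    z = norm.ppf(1.0 - (1.0 - p) / 2.0)
    return (z / r) ** 2                     # ~1082.4 for p=0.90, r=0.05

standard = full_credibility_standard()
claims, period = 0, 0
monthly_counts = [90, 110, 95, 105, 100, 98, 102, 97, 103, 99, 101, 104]
for n in monthly_counts:                    # data accrued in real time
    claims += n
    period += 1
    if claims >= standard:                  # stopping time
        print(f"Full credibility reached after {period} months "
              f"({claims} claims >= {standard:.0f}).")
        break
```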