Browse Research

2023
This paper proposes maximum likelihood inference for predictive analytics models of misrepresentation in insurance underwriting. The models allow for multiple risk factors in both the ratemaking models and the latent models of misrepresentation. This latent-variable approach enables learning of misrepresentation risk at the policy level without requiring labelled fraud data.
2023
Multivariate loss distributions have been a staple of actuarial work. This paper aims to put forth a versatile class of multivariate mixtures of gamma distributions tailored for actuarial applications.
2023
Model-based decisions are highly sensitive to model risk that arises from the inadequacy of the adopted model. This paper reviews the existing literature on model risk assessment and shows how to use the theoretical results to develop a corresponding best practice. Specifically, we develop tools to assess the contribution to model risk of each of the assumptions that underpin the adopted model.
2023
We show that a catastrophe bond’s economically meaningful cash flows are the same as those of a fully continuous term life insurance policy. As a result, we obtain a simple catastrophe bond pricing formula expressed in standard actuarial notation. Catastrophe bonds covering seasonal perils have a periodic event intensity, so their issue price varies with the effective date during the year. We quantify this variation.
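For reference (a standard identity, not taken from the paper), the expected present value of a fully continuous n-year term insurance on a life aged x, in standard actuarial notation, is

\bar{A}^{\,1}_{x:\overline{n}|} = \int_0^n e^{-\delta t}\, {}_{t}p_x\, \mu_{x+t}\, dt.

The analogy described in the abstract suggests the bond price takes a form of this kind, with the catastrophe event intensity playing the role of the force of mortality \mu_{x+t}.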
2023
Agricultural production is highly vulnerable to both short-term extreme weather events and long-term climate variability and change. These impacts propagate further and result in socioeconomic changes affecting farmers, insurers, and other stakeholders across agricultural supply chains.
2023
The American Academy of Actuaries defines credibility as a measure of the predictive value that the actuary attaches to a particular set of data in a given application.
2022
In building predictive models for actuarial losses, it is typical to trend the target loss variable with an actuarially derived trend rate. This paper shows that this practice creates elusive predictive biases that are costly to an insurance book of business, and it proposes more effective ways to contemplate trends in actuarial predictive models.
2022
It is often difficult to know which models to use when setting rates for auto insurance. We develop a market-based model selection procedure that incorporates the goals of the business. The procedure also makes it easier to interpret the results and to better understand the models and data.
2022
The frequency and impact of cybersecurity incidents increase every year, with data breaches, often caused by malicious attackers, among the most costly, damaging, and pervasive.
2022
The paper develops a policy-level unreported claim frequency distribution for use in individual claim reserving models. Recently, there has been increased interest in using individual claim detail to estimate reserves and to understand variability around reserve estimates. The method we describe can aid in the estimation and simulation of pure incurred but not reported (IBNR) claims from individual claim and policy data.
2022
We develop Gaussian process (GP) models for incremental loss ratios in loss development triangles. Our approach brings a machine learning, spatial-based perspective to stochastic loss modeling. GP regression offers a nonparametric predictive distribution for future losses, quantifying uncertainty across three distinct layers: model risk, correlation risk, and extrinsic uncertainty due to randomness in observed losses.
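As a reminder of the general machinery (standard GP regression, not the paper’s specific kernel or data setup), the posterior predictive distribution at new inputs X_* given noisy observations y at inputs X is Gaussian with

\mu_* = K(X_*, X)\,[K(X, X) + \sigma^2 I]^{-1} y,
\Sigma_* = K(X_*, X_*) - K(X_*, X)\,[K(X, X) + \sigma^2 I]^{-1} K(X, X_*),

which is the sense in which GP regression delivers a full predictive distribution for future losses rather than a point estimate.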
2021
In ratemaking, calculation of a pure premium has traditionally been based on modeling frequency and severity in an aggregated claims model. For simplicity, it has been standard practice to assume the independence of loss frequency and loss severity. In recent years, there has been sporadic interest in the actuarial literature in models that depart from this independence assumption.
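For context, with aggregate claims S = X_1 + \cdots + X_N, the independence assumption gives

E[S] = E[N]\, E[X],

so the pure premium factors into expected frequency times expected severity; dependence between N and the X_i breaks this simple factorization.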
2021
Maximum likelihood estimation has been the workhorse of statistics for decades, but alternative methods, going under the name “regularization,” are proving to have lower predictive variance. Regularization shrinks fitted values toward the overall mean, much like credibility does. There is good software available for regularization, and in particular, packages for Bayesian regularization make it easy to fit more complex models.
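A minimal sketch of the analogy (illustrative, not the paper’s own formulation): a credibility estimate for class i shrinks the class mean toward the overall mean,

\hat{\theta}_i = Z\, \bar{X}_i + (1 - Z)\, \bar{X}, \qquad 0 \le Z \le 1,

much as ridge- or lasso-type regularization shrinks fitted coefficients toward zero and fitted values toward the grand mean.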
2021
Analysis of truncated and censored data is a familiar part of actuarial practice, and so far the product-limit methodology, with the Kaplan-Meier estimator as its vanguard, has been the main statistical tool. At the same time, for directly observed data, the sample mean methodology yields both efficient estimation and dramatically simpler statistical inference.
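For reference, the Kaplan-Meier (product-limit) estimator of the survival function is

\hat{S}(t) = \prod_{t_i \le t} \left(1 - \frac{d_i}{n_i}\right),

where d_i is the number of events and n_i the number at risk at observed event time t_i.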
2021
The Bornhuetter-Ferguson method is among the more popular methods of projecting non-life paid or incurred triangles. For this method, Thomas Mack developed a stochastic model allowing the estimation of the prediction error resulting from such projections. Mack’s stochastic model involves a parametrization of the Bornhuetter-Ferguson method based on incremental triangles of incurred or paid losses.
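As background (the standard deterministic form, not Mack’s stochastic parametrization), the Bornhuetter-Ferguson ultimate for an origin period is

\hat{U}_{BF} = C + \left(1 - \tfrac{1}{f}\right) U_0,

where C is the paid or incurred amount to date, f the development factor to ultimate, and U_0 the a priori expected ultimate.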
2021
Capital allocation is an essential task for risk pricing and performance measurement of insurance business lines. This paper provides a survey of existing capital allocation methods, including common approaches based on the gradients of risk measures and economic allocation arising from counterparty risk aversion. We implement all methods in two example settings: binomial losses and loss realizations from a catastrophe reinsurer.
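One common gradient-based approach, included here only as an illustration of that class of methods, is the Euler allocation, which assigns to line i

A_i = \left.\frac{d}{dh}\, \rho(S + h X_i)\right|_{h=0}, \qquad S = \sum_i X_i,

and, for a differentiable, positively homogeneous risk measure \rho, the allocations sum to the total capital \rho(S).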
2021
In this paper, we propose a generalization of the individual loss reserving model introduced by Pigeon et al. (2013), using a discrete-time framework for claims development. We use a copula to model the potential dependence within the development structure of a claim, which allows a wide variety of marginal distributions. We also add a specific component to account for claims closed without payment.
2021
The concept of risk distribution, or aggregating risk to reduce the potential volatility of loss results, is a prerequisite for an insurance transaction. But how much risk distribution is enough for a transaction to qualify as insurance? This paper looks at different methods that can be used to answer that question and ascertain whether or not risk distribution has been achieved from an actuarial point of view.
2021
This paper demonstrates an approach for applying the lasso variable shrinkage and selection method to loss models arising in actuarial science. Specifically, the group lasso penalty is applied to the GB2 distribution, which is widely used in actuarial research.
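For reference, the group lasso augments the negative log-likelihood with a penalty of the form

\lambda \sum_{g} \sqrt{p_g}\, \lVert \beta_g \rVert_2,

where \beta_g is the coefficient sub-vector for group g (for example, the dummy variables of one categorical rating factor), p_g is its size, and the unsquared \ell_2 norm forces entire groups of coefficients to zero together.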
2021
Classical credibility theory circumvents the challenge of finding the bona fide Bayesian estimate (with respect to squared-error loss) by restricting attention to the class of linear estimators of the data. See, for example, Bühlmann and Gisler (2005) and Klugman et al. (2008) for a detailed treatment.
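The resulting Bühlmann credibility estimator (standard background, covered in the references cited) has the familiar linear form

Z\, \bar{X} + (1 - Z)\, \mu, \qquad Z = \frac{n}{n + k}, \quad k = \frac{E[\sigma^2(\Theta)]}{\operatorname{Var}[\mu(\Theta)]},

where \bar{X} is the risk’s own sample mean over n periods and \mu the collective mean; this is the best linear approximation to the Bayesian estimate under squared-error loss.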