Search Presentations

The presentation materials are offered in connection with CAS professional education offerings. © 2022 Casualty Actuarial Society. All Rights Reserved. The presentation materials may contain copyrighted content the use of which has not been specifically authorized by the copyright owner. You are permitted to view and print the materials for personal/professional noncommercial research purposes. Except for the foregoing, you agree not to reproduce, distribute, modify, create derivative works, or commercially exploit the presentation materials without prior written permission from CAS. Please direct any copyright permission inquiries regarding use of the presentation materials to acs@casact.org.

STAY TUNED! If you are anticipating additional search filters by attribute and level to align with the CAS Capability Model, they are coming later this summer. As the CAS begins to code recorded sessions by specific attributes and levels (starting with the 2023 Annual Meeting), sessions will be tagged in the CAS database of presentations going forward and should be searchable.

In the meantime, you may use the Capability Model to help identify topics. For example, if you want to move up one level under the content area “Functional Expertise,” you may search for topics in that functional area to expand your knowledge.

Recorded content is searchable by Capability Model attribute and level in the CAS Online Library.

PEBELS: Property Exposure Based Excess Loss Smoothing

PEBELS, or property exposure based excess loss smoothing, was born of the need to develop estimates of high layer expected loss cost for extremely small, non-credible segments of a primary property book of business. The existing actuarial literature provides methods for estimating high layer excess loss cost for large property portfolios in aggregate, but is silent on how to produce similar provisions for smaller subsets of such a book. PEBELS was developed to be just such a method. It generalizes existing pricing theory from published property per risk reinsurance exposure rating methods and leverages increasingly available exposure data to produce a method that allows the practitioner to develop accurate high layer expected loss cost estimates down to the policy level. After formulating the theoretical framework required to implement the method in practice, the paper explores applications of PEBELS beyond primary insurance pricing, such as adjusting modeled catastrophe average annual losses (AALs) for bias from implicit linearity assumptions, improving the predictiveness of property predictive models including generalized linear models (GLMs), and refining the published property per risk reinsurance exposure rating method.
Source: 2013 Ratemaking and Product Management Seminar
Type: concurrent
Moderators: Mike Covert
Panelists: Marquis Moehring
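
As background for the abstract above: PEBELS builds on property per risk exposure rating, in which an exposure (first-loss) curve allocates a risk's expected loss across layers. The sketch below is a generic, hypothetical illustration of that underlying allocation, not the paper's actual curves or formulas; the curve shape, function names, and dollar amounts are assumptions.

```python
# Hypothetical sketch of per-risk exposure rating, the idea PEBELS generalizes.
# The exposure curve G(d) gives the share of a risk's expected loss arising from
# the portion of each loss below damage ratio d (loss / insured value).
# The curve used here is an illustrative placeholder, not a published curve.

def exposure_curve(d: float, b: float = 0.25) -> float:
    """Illustrative concave first-loss curve on [0, 1]."""
    d = min(max(d, 0.0), 1.0)
    return d ** b

def layer_expected_loss(total_expected_loss: float,
                        insured_value: float,
                        attachment: float,
                        limit: float) -> float:
    """Expected loss in the layer 'limit xs attachment' for one policy."""
    low = exposure_curve(attachment / insured_value)
    high = exposure_curve((attachment + limit) / insured_value)
    return total_expected_loss * (high - low)

# Policy-level example: $2.5M insured value, $5,000 ground-up expected loss,
# provision for the layer $1M xs $1M.
print(layer_expected_loss(5_000, 2_500_000, 1_000_000, 1_000_000))
```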

Part V: Blending Historical Data and Models

Severe thunderstorms have affected many parts of the country recently, sometimes causing extensive tornado and hail damage. This session will discuss the latest science behind tornado/hail/thunderstorm modeling. It will consider how historical experience and catastrophe models can complement each other, specifically giving an example of blending historical data with model results.
Source: 2013 Ratemaking and Product Management Seminar
Type: concurrent
Moderators: Mike Covert
Panelists: David Lalonde
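
One common way to blend historical data with model results, which the session's actual example may or may not follow, is a credibility weighting of historical and modeled event frequencies. The sketch below is a minimal illustration under an assumed weighting rule and a hypothetical full-credibility standard.

```python
# Hypothetical credibility blend of historical and modeled annual loss frequency.
# The square-root weighting and 30-year full-credibility standard are
# illustrative assumptions, not the presenters' method.

def blend_frequency(historical_freq: float,
                    modeled_freq: float,
                    years_of_history: float,
                    full_credibility_years: float = 30.0) -> float:
    """Credibility-weighted blend of the historical and modeled estimates."""
    z = min(1.0, (years_of_history / full_credibility_years) ** 0.5)
    return z * historical_freq + (1.0 - z) * modeled_freq

# Example: 12 years of hail experience blended with catastrophe model output.
print(blend_frequency(historical_freq=0.040, modeled_freq=0.055, years_of_history=12))
```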

Part III: Hurricane Modeling

As the state of scientific research around the effect of climate change on hurricane behavior continues to evolve, the insurance industry has an ever-increasing need to be informed. Probabilistic hurricane models have several ways of representing the uncertainty in hurricane frequency, including the use of multiple forecasts. This session will discuss how RMS uses the Medium Term Rate Forecast to provide additional insight into the ever-changing landscape of hurricane risk.
Source: 2013 Ratemaking and Product Management Seminar
Type: concurrent
Moderators: Mike Covert
Panelists: Matthew Nielsen

Part II: Overview of Traditional Methods for Incorporating Weather Activity in Rates

Property insurance results can be volatile due to the catastrophic nature of the risk involved. The hurricane peril is typically a major risk for many property insurance companies; however, in recent years, events like wildfires and tornadoes have been more prevalent than in the past, threatening to wipe out any potential profit and increase the chance of insolvency. This session will describe how insurance companies account for these previously non-modeled catastrophes within their pricing methodology. In particular, the session will discuss methodologies that organize data into catastrophe versus non-catastrophe events or weather versus non-weather events, along with the advantages and disadvantages of each.
Source: 2013 Ratemaking and Product Management Seminar
Type: concurrent
Moderators: Mike Covert
Panelists: Jamie Mills

Niche Identification

During this session participants will learn key elements of niche identification. Facilitators will explore how developing a new product is often about identifying an underserved niche or finding ways to attract risks more likely to be profitable.
Source: 2013 Ratemaking and Product Management Seminar
Type: concurrent
Moderators: Mike Covert
Panelists: Robin Harbage

New Product Approval Process

New products are important to an organization's continued growth and marketplace relevance. What processes and governance should companies consider in terms of approving new products? What role (if any) do senior management and the board have in shaping and bringing forth new products? Speed to market is not always a predictor of success in managing products. This session will provide a perspective on how to achieve speed to profit through effective project selection, prioritization, and execution. We will also discuss the factors that contribute to success in launching new products and leading practices based on research and practical experience.
Source: 2013 Ratemaking and Product Management Seminar
Type: concurrent
Moderators: Mike Covert
Panelists: Kelly Cusick

Model Validation - Seconds Anyone?

As predictive modeling takes off in the insurance community, companies are facing the growing realization that building and implementing the second iteration of a model can be just as challenging as the first. The first version of this talk, “Know Your Audience” (given at the 2011 and 2012 RPM Seminars by this session’s panel), concerned model validation from three perspectives (the modeler, senior management, and the regulator) and emphasized the first iteration of a model and the basics. “Seconds Anyone?” will focus on the second model iteration and address the concerns of the same three constituencies. For modelers, the panel will explore the added challenges in validating the model and what techniques can be used to compare it to the previous version. For senior management, the panel will discuss how one should view disruption in the context of incremental changes to the rating plan. From a regulatory perspective, the panel will address the specific areas that should be focused on when evaluating an update to an existing model. Attending the “Know Your Audience” session is not a prerequisite for this session.
Source: 2013 Ratemaking and Product Management Seminar
Type: concurrent
Moderators: Mike Covert
Panelists: Kevin Mahoney, Larry Haefner, Richard Piazza

Model Blending

Given the events of the past several years, the development of “custom” views of catastrophe risk through adjusting, or blending, output from multiple catastrophe models has become increasingly common among insurance companies. The benefits of this approach include: modeled results that better reflect a company’s actual claims experience and reduced model risk from no longer relying on a single vendor model. This session will briefly discuss vendor catastrophe model validation techniques and then describe several methods, from simple to complex, for adjusting catastrophe model output to develop a “custom” view of catastrophe risk.
Source: 2013 Ratemaking and Product Management Seminar
Type: concurrent
Moderators: Mike Covert
Panelists: Adam Troyer
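
At the simple end of the “simple to complex” spectrum described above is a straight weighted average of per-location output from two vendor models. The sketch below illustrates that idea only; the weights, column names, and data are assumptions, and real blending choices would be driven by the validation work the session describes.

```python
# Hypothetical weighted blend of per-location AALs from two vendor models.
# In practice the weights would be informed by validating each model against
# the company's own claims experience; the 60/40 split here is illustrative.
import pandas as pd

model_a = pd.DataFrame({"location_id": [1, 2, 3], "aal": [1200.0, 800.0, 450.0]})
model_b = pd.DataFrame({"location_id": [1, 2, 3], "aal": [1500.0, 700.0, 500.0]})

blended = model_a.merge(model_b, on="location_id", suffixes=("_a", "_b"))
blended["aal_blended"] = 0.6 * blended["aal_a"] + 0.4 * blended["aal_b"]
print(blended[["location_id", "aal_blended"]])
```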

Mileage Based Rating in the Current Auto Insurance Environment

Verification of annual mileage for auto insurance has been an issue for years, and has resulted in companies either living with what they know is bad data or deciding to stop using annual mileage altogether. The use of telematics will obviously address the issue, but not completely and not quickly. Only a very small percentage of auto insurance customers currently use telematics devices, and it will take years for the number of users to increase significantly. Also, because the use of telematics is voluntary, it will not provide a source of verified mileage for those who opt not to use it. So for the foreseeable future, companies will still have to contend with this issue. This session will discuss the use of vehicle history records to validate annual mileage. There are companies that collect vehicle history information from numerous sources, and one of the types of data collected from these vehicle history records is annual mileage. The session will discuss how this information is used to verify reported annual mileage, how verified mileage information is used to develop models that predict average annual mileage, and results of analyses that demonstrate the correlation of calculated and predicted average miles with insurance losses.
Source: 2013 Ratemaking and Product Management Seminar
Type: concurrent
Moderators: Mike Covert
Panelists: Roosevelt Mosley

Marketing

If a tree falls in the forest but no one hears it, does it make a sound? Likewise, if a product is designed but doesn't get to market, has a product been developed? Participants will discuss marketing issues and ways to measure marketing effectiveness.
Source: 2013 Ratemaking and Product Management Seminar
Type: concurrent
Moderators: Mike Covert
Panelists: Kelly McKeethan

Loss Cost Modeling vs Frequency and Severity Modeling

In recent years, loss cost modeling using the Tweedie distribution has been gaining popularity in GLM-based predictive modeling practice, alongside frequency and severity modeling. This has left us with two widely used GLM design approaches, each with its own strengths and weaknesses. The frequency-severity approach models claim frequency (claim count over exposure) and claim severity (incurred loss over claim count) separately, then combines those results to create loss cost estimates. Enhancements of the basic frequency-severity approach include the creation of modeled loss cost datasets to facilitate offsetting and special treatment or modeling of high-severity losses. The loss cost approach uses loss cost data (incurred loss over exposure) directly as the target variable in the model. In this session, we will discuss the pros and cons of the two model design approaches during a class plan study or underwriting tier development. The discussion will cover both business and statistical considerations for personal and commercial lines. We will use multiple data sources to illustrate comparisons and support our findings.
Source: 2013 Ratemaking and Product Management Seminar
Type: concurrent
Moderators: Mike Covert
Panelists: Jun Yan
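
A minimal sketch of the two GLM designs being compared, written with statsmodels. The column names, the choice of links, and the Tweedie variance power are illustrative assumptions, not the presenter's specification.

```python
# Sketch of the two GLM designs discussed above.
# df: policy-level data with columns exposure, claim_count, incurred_loss.
# X:  design matrix (DataFrame) with an intercept, row-aligned with df.
import statsmodels.api as sm

def frequency_severity_loss_cost(X, df):
    # Frequency: Poisson GLM on claim counts, exposure entering as an offset.
    freq = sm.GLM(df["claim_count"], X,
                  family=sm.families.Poisson(),
                  exposure=df["exposure"]).fit()
    # Severity: Gamma GLM on average severity, claim-count weighted,
    # fit only to records that have claims.
    has_claim = df["claim_count"] > 0
    sev = sm.GLM(df.loc[has_claim, "incurred_loss"] / df.loc[has_claim, "claim_count"],
                 X[has_claim],
                 family=sm.families.Gamma(sm.families.links.Log()),
                 var_weights=df.loc[has_claim, "claim_count"]).fit()
    # Loss cost estimate = predicted frequency * predicted severity.
    return freq.predict(X) * sev.predict(X)

def tweedie_loss_cost(X, df):
    # Pure premium modeled directly with a Tweedie GLM (variance power 1 < p < 2).
    lc = sm.GLM(df["incurred_loss"] / df["exposure"], X,
                family=sm.families.Tweedie(var_power=1.6),
                var_weights=df["exposure"]).fit()
    return lc.predict(X)
```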

Location Level Pricing

Although the use of catastrophe models has been widely adopted in the industry, the literature contains relatively little on how to use catastrophe model output to construct a rating plan. In practice, pricing for catastrophe perils is often based on Average Annual Loss by territory, with territorial definitions that do not adequately differentiate risk. Class factors may be determined without consideration of correlation among them or between class factors and territory. This session will consider a more sophisticated method by describing in detail an approach to constructing a pricing structure through a multivariate analysis using cat model output combined with Geographic Information Systems (GIS) data. It will include discussion of how to construct territories appropriate for the hurricane peril, how to construct base rates and class factors, and the limitations of this approach.
Source: 2013 Ratemaking and Product Management Seminar
Type: concurrent
Moderators: Mike Covert
Panelists: Matthew Chamberlain

Intelligent Use of Competitive Analysis

Competitive analysis is one of the key elements in measuring the performance of a rate algorithm. This requires the capture and analysis of competitive data. Over the years, much work has been performed in capturing the data; however, there is a significant lack of sophistication in analyzing it. This session will begin by discussing the sources and challenges of acquiring good competitive information. Then, the focus will be on how to analyze the data to make informed decisions. Analysis strategies will run the gamut from traditional mining of the data to more sophisticated analysis and then to incorporation of the information into demand curves.
Source: 2013 Ratemaking and Product Management Seminar
Type: concurrent
Moderators: Mike Covert
Panelists: Serhat Guven, Kelleen Arquette
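
One way competitive data can feed a demand curve, offered here as a hypothetical illustration only, is to model quote conversion as a function of the ratio of the quoted premium to the best competitor premium. The data, field names, and logistic form below are assumptions.

```python
# Hypothetical demand-curve fit: probability of converting a quote as a
# function of our premium relative to the lowest competitor premium.
import numpy as np
from sklearn.linear_model import LogisticRegression

# competitiveness ratio = our_premium / best_competitor_premium (illustrative data)
ratio = np.array([0.85, 0.95, 1.00, 1.05, 1.15, 1.30, 0.90, 1.10, 1.20, 1.00])
converted = np.array([1, 1, 1, 0, 0, 0, 1, 1, 0, 0])

demand = LogisticRegression().fit(ratio.reshape(-1, 1), converted)

# Estimated conversion probability across a range of price positions.
grid = np.linspace(0.8, 1.4, 7).reshape(-1, 1)
print(np.column_stack([grid.ravel(), demand.predict_proba(grid)[:, 1].round(3)]))
```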

Integrating Text-Data into Predictive Analytics: A Demonstration Using Motor-Vehicle Crash Descriptions to Identify Drivers under the Influence of Legal or Illegal Drugs

A 2007 survey by the National Highway Traffic Safety Administration found that 16.3 percent of nighttime drivers tested positive for legal or illegal drugs. The study was not part of an enforcement initiative but a random, roadside study. Although there are widely accepted tests to check whether a driver is under the influence of alcohol, no comparable tests exist for whether a driver may be under the influence of a legal or illegal drug. A driver in a motor vehicle crash may be reluctant to admit that drug use impaired their driving. Furthermore, some drivers may have considered the consumption, or the amount consumed, not to be noteworthy. Finally, conventional reporting forms may not have the necessary options to capture the information, which may especially be the case with pharmaceutical medications. Claim adjuster notes provide a data source to identify drug use not captured at the time of the accident or on conventional reporting forms. Moreover, the number of medical conditions and medications involved is considerable, and conventional structured-data forms may not be capable of capturing the many types and conditions associated with drug use. The National Motor Vehicle Crash Causation Survey database provides crash descriptions for a large sample of motor vehicle accidents. The crash descriptions are unformatted text that can extend to several hundred words, and they are very similar in form and substance to property-casualty claim adjuster notes. This presentation will demonstrate how text data can be mined and integrated with structured data to perform predictive analytics. The value of the text data is that it improves the explanatory power of predictive analytics beyond what structured data alone can provide.
Source: 2013 Ratemaking and Product Management Seminar
Type: concurrent
Moderators: Mike Covert
Panelists: Philip Borba
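
A minimal sketch of the general technique the presentation describes: extracting features from free-text descriptions and combining them with structured fields in one predictive model. The records, labels, and column choices below are hypothetical and are not drawn from the NMVCCS data.

```python
# Sketch of combining unstructured text with structured data in one model.
import scipy.sparse as sp
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

crash_text = [
    "driver drifted across centerline, admitted taking prescription sedative",
    "rear-end collision in stop-and-go traffic, no impairment noted",
    "vehicle left roadway at night, officer noted slurred speech",
]
structured = sp.csr_matrix([[1, 23],   # e.g., nighttime flag, driver age (hypothetical)
                            [0, 41],
                            [1, 35]], dtype=float)
label = [1, 0, 1]  # drug involvement identified on later review (hypothetical)

# Text features (TF-IDF) stacked alongside the structured columns.
text_features = TfidfVectorizer(min_df=1).fit_transform(crash_text)
X = sp.hstack([text_features, structured]).tocsr()

clf = LogisticRegression(max_iter=1000).fit(X, label)
print(clf.predict_proba(X)[:, 1].round(2))
```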

Indemnity Benefit Duration and Obesity

There is mounting evidence that obesity contributes to the cost of workers compensation. Longitudinal studies point to substantially higher odds of injury for workers in the highest obesity category. Further, a 2011 Gallup survey found that obese employees account for a disproportionately high number of missed workdays, causing a significant loss in economic output. And an NCCI study of workers compensation claims established that where claimants are assigned a comorbidity code indicating obesity, the medical costs of the claim are a multiple of what is observed otherwise. The speaker at this session will describe how a matched-pairs research design was used to determine that, based on Temporary Total and Permanent Total indemnity benefit payments, the benefit duration for obese claimants is more than five times that for non-obese claimants, after controlling for primary ICD-9 code, injury year, U.S. state, industry, gender, and age. When Permanent Partial benefits are counted toward indemnity benefit duration as well, this multiple climbs to more than six.
Source: 2013 Ratemaking and Product Management Seminar
Type: concurrent
Moderators: Mike Covert
Panelists: Chris Laws

Incorporating Reinsurance Considerations into Product Design Using an Augmented Price Optimization Framework

This session will discuss how reinsurance profitability analysis conducted in a price optimization setting can inform product design decisions. Indeed, as alternative reinsurance arrangements impact the net risk profile of portfolios, they eventually affect underwriting rules, premium income, policy retention, quote conversion patterns, and ultimately the bottom-line net of reinsurance profitability. However, usual two-dimensional price optimization analyses fail to capture the impact of reinsurance on portfolio metrics by only focusing on portfolio-level trade-offs between top line volume and gross of reinsurance profitability. By adding a third dimension representing reinsurance spend, such analyses can be expanded and lead to better informed product design decisions. This session will present a conceptual framework and discuss practical challenges for implementation, from data gathering to risk modeling and building adequate management information systems and dashboards.
Source: 2013 Ratemaking and Product Management Seminar
Type: concurrent
Moderators: Mike Covert
Panelists: Jason Harger, Yves Colomb

Implementing Value Based Pricing - What and How People Buy

Ronald Baker will build upon his well-received sessions on breaking commodity-based thinking from previous RPM Seminars. Developing an understanding of the nine factors of price sensitivity will not only assist actuaries in creating better models but also provide a platform for management and product managers to identify new niches, better position insurance products, and begin to price customers rather than policies.
Source: 2013 Ratemaking and Product Management Seminar
Type: concurrent
Moderators: Ron Kozlowski
Panelists: Ronald Baker

Homeowners Ratemaking by Peril - Data Issues

In homeowners insurance, consumers are charged a single price for the coverage; however, this coverage involves multiple perils. This session discusses the data issues involved in determining a price for multiple perils. Issues that will be explored include preparing the data for modeling; dealing with loss data, including text mining approaches; handling missing values; working with geographic data; the role of Public Protection Class; segmentation opportunities; the use of principal components; improving model robustness; and methods to reduce the number of potential predictors.
Source: 2013 Ratemaking and Product Management Seminar
Type: concurrent
Moderators: Ron Kozlowski
Panelists: Michael Nielsen

Homeowners Ratemaking By Peril - Application and Implementation

This second session on by-peril ratemaking explores the application and implementation of multi-peril rates. Topics will include why we rate by peril, grouping of perils, variable selection, model validation, development of factors, territorial ratemaking, pricing catastrophe perils, and issues in creating manuals, making filings, and handling state exceptions.
Source: 2013 Ratemaking and Product Management Seminar
Type: concurrent
Moderators: Chuck Boucek
Panelists: David Cummings, Anton Zalesky

Health Care Reform

In 2010, the Healthcare Reform Act was signed into law. This session will focus on the trends emerging from the law and the impact it is having on the medical malpractice marketplace. Panelists will discuss potential unintended consequences that may affect other lines of business. The session will also cover emerging inflationary and demographic trends, such as the shift toward employed physician status.
Source: 2013 Ratemaking and Product Management Seminar
Type: concurrent
Moderators: Chuck Boucek
Panelists: Anne Petrides, Laura Cali

GLM III - The Matrix Reloaded

This session will consider new techniques and refinements to the basic GLM which can add material value to the modeling process. It will specifically consider amendments which address some of the purported failings of GLMs in comparison to emerging methods such as machine learning techniques. The session will include a discussion of: an innovative approach to detecting subtle and higher-dimensional interactions in an efficient way, potentially eliminating the need to consider alternative, harder-to-implement model forms such as nonlinear models; the role of such automated methods in comparison with more manual construction of composite explanatory variables; ways to mitigate the risk of over-parameterization through modifications which incorporate elements of credibility within the GLM framework; simple practical modeling steps that can be used to remove distortions created by combining models across claim types; and innovative ways of modeling bodily injury claims, along with other miscellaneous refinements.
Source: 2013 Ratemaking and Product Management Seminar
Type: concurrent
Moderators: Chuck Boucek
Panelists: Duncan Anderson

GLM II

GLM I provided the case for using GLMs and some basic GLM theory. GLM II will be a practical session outlining basic modeling strategy. The discussion will cover topics such as overall modeling strategy, selecting an appropriate error structure and link function, simplifying the GLM (i.e., excluding variables, grouping levels, and fitting curves), complicating the GLM (i.e., adding interactions), and validating the final model. The session will discuss diagnostics that help test the selections made.
Source: 2013 Ratemaking and Product Management Seminar
Type: concurrent
Panelists: Ernesto Schirmacher, Lenard Llaguno

GLM I

Do terms such as “link function,” “exponential family,” and “deviance” leave you puzzled? If so, this session will clarify those terms and demystify generalized linear models (GLMs). The session will provide a basic introduction to linear models and GLMs. Targeted at those who have modest experience with statistics or modeling, the session will start with a brief review of traditional linear models, particularly regression, which has been taught and widely applied for decades. Session leaders will explain how GLMs naturally arise as some of the restrictive assumptions of linear regression are relaxed. GLMs can model a wide range of phenomena, including frequencies and severities as well as the probability that a claim is fraudulent or abusive, to name just a few. The session will emphasize intuition and insight in addition to mathematical calculations. Illustrations will be presented using actual insurance data.
Source: 2013 Ratemaking and Product Management Seminar
Type: concurrent
Panelists: Ernesto Schirmacher
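
A minimal worked example of the kind of model this introductory session covers: a Poisson claim-frequency GLM with a log link, fit to synthetic data so that the link function, exponential family, and deviance mentioned in the abstract appear concretely. The rating variables and true rates are invented for illustration.

```python
# Minimal claim-frequency GLM on synthetic data (illustrative only).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000
young_driver = rng.binomial(1, 0.3, n)
urban = rng.binomial(1, 0.5, n)
exposure = rng.uniform(0.25, 1.0, n)

# True rates used only to simulate claim counts.
lam = 0.08 * np.exp(0.5 * young_driver + 0.3 * urban) * exposure
claims = rng.poisson(lam)

# Poisson family with its default log link; exposure enters as an offset.
X = sm.add_constant(np.column_stack([young_driver, urban]))
result = sm.GLM(claims, X, family=sm.families.Poisson(), exposure=exposure).fit()

# Coefficients are on the log scale; exponentiating gives multiplicative relativities.
print(np.exp(result.params))   # roughly [0.08, 1.65, 1.35]
print(result.deviance)         # the "deviance" referenced in the abstract
```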

Extending the Asset Share Model: Recognizing the Value of Options in P&C Insurance Rates

This session will present a refinement of the well-known asset share model for ratemaking. The new method for calculating premiums and premium relativities accounts for risk classification transition probabilities. The relationship between risk class transition and options on insurance coverage will also be discussed. Simple worked examples will demonstrate how risk class transition can cause problems for the traditional asset share model and how the extended asset share model remedies them. The new method will also be used to determine the price for insurance policies with the popular “accident forgiveness” feature.
Source: 2013 Ratemaking and Product Management Seminar
Type: concurrent
Moderators: Chuck Boucek
Panelists: Serhat Guven
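
As a rough illustration of the underlying idea, and not the authors' actual formulation, the sketch below computes a discounted expected lifetime profit for a policyholder whose risk class evolves according to a transition matrix. The classes, transition probabilities, premiums, losses, retention, and discount rate are all hypothetical.

```python
# Hypothetical asset share calculation with risk class transitions.
import numpy as np

transition = np.array([[0.90, 0.10],   # clean -> clean / recent accident
                       [0.60, 0.40]])  # recent accident -> clean / recent accident
premium   = np.array([600.0, 900.0])   # annual premium by class (illustrative)
exp_loss  = np.array([450.0, 700.0])   # expected annual loss by class (illustrative)
retention = 0.88                        # assumed probability the policy renews each year
discount  = 0.96                        # assumed annual discount factor

def expected_lifetime_profit(start_class: int, horizon: int = 10) -> float:
    """Discounted expected profit for a policyholder starting in start_class."""
    state = np.zeros(2)
    state[start_class] = 1.0
    total, persistence = 0.0, 1.0
    for t in range(horizon):
        total += persistence * discount ** t * state @ (premium - exp_loss)
        state = state @ transition          # class mix one year later
        persistence *= retention            # probability still on the books
    return total

print(expected_lifetime_profit(start_class=0))  # new "clean" insured
print(expected_lifetime_profit(start_class=1))  # insured with a recent accident
```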