Search Presentations

The presentation materials are offered in connection with CAS professional education offerings. © 2022 Casualty Actuarial Society. All Rights Reserved. The presentation materials may contain copyrighted content the use of which has not been specifically authorized by the copyright owner. You are permitted to view and print the materials for personal/professional noncommercial research purposes. Except for the foregoing, you agree not to reproduce, distribute, modify, create derivative works, or commercially exploit the presentation materials without prior written permission from CAS. Please direct any copyright permission inquiries regarding use of the presentation materials to acs@casact.org.

Viewing 4976 to 5000 of 6735 results
STAY TUNED! Additional search filters by attribute and level, aligned with the CAS Capability Model, are coming later this summer. As the CAS begins to code recorded sessions by specific attributes and levels (starting with the 2023 Annual Meeting), these will be tagged in the CAS database of presentations going forward and should be searchable.

But you may use the Capability Model now to help you identify topics. For example, if you want to move up one level under the content area “Functional Expertise,” you may search topics in that functional area to expand your knowledge.

Recorded content is searchable by Capability Model attribute and level in the CAS Online Library.

Data and Information Quality in a Rapidly Changing World

The focus of insurance information (the type of data, how it's collected, how it's used) has been changing dramatically over the past few years, but have data quality methods and tools kept pace? This session will explore the changing focus of insurance information, the expanding reach of data quality (into predictive modeling, the reuse of data, data standards, data transparency, solvency monitoring, and more), and the impact of these changes on companies and the industry.
Source: 2007 Annual Meeting
Type: concurrent
Moderators: Kevin Rooney
Panelists: Gary Knoble, Tracy Spadola, Thomas Nowak
Keywords: Data and Information Quality

Claims Process Improvement and Automation

Claims organizations are facing growing challenges from the increasing demands of complex claims handling and the need to control rising loss costs. This session will address recent trends and operational demands currently facing claims organizations, and potential issues on the horizon. Some of the key measurement indicators used to evaluate the performance of these organizations relative to their peers will also be reviewed. The panelists will discuss trends and changes from the perspective of the company's claims operations, as well as the impact on actuaries as key end users of the information produced. Additionally, an update on the evolution and current state of the claims automation market will be provided.
Source: 2007 Annual Meeting
Type: concurrent
Moderators: Kevin Rooney
Panelists: Claire Louis, Jeff Bamundo, Andrew Sawyer
Keywords: Claims Process

Application of the Actuarial Skill Set in Financial Markets

Actuaries are continually looking beyond the traditional roles of pricing and reserving, the "meat and potatoes" of actuarial work. Perhaps they're looking for something off the beaten track (fish, anyone?). This panel will explore how some actuaries have taken their traditional training and applied it within securitization and other financial areas.
Source: 2007 Annual Meeting
Type: concurrent
Moderators: Kevin Rooney
Panelists: Philip Kane, Scott Swanay, Lawrence Marcus
Keywords: Actuarial Skill Set

Adequacy of P/C Insurance Loss Reserves: Has the Pendulum Swung Too Far?

The soft market of the late 1990s brought not only inadequate pricing but also a wave of reserve increases. Recognition of substantial adverse development of booked loss and loss adjustment expense reserves was unfortunately commonplace between 2000 and 2004. In addition, property/casualty insurers generally continued to strengthen loss reserves associated with asbestos, pollution, and other health hazards. The hard market years swung pricing and reserving in the other direction. Clearly, loss reserves are much stronger than they were a few years ago. But have industry-wide loss reserve levels in the U.S. actually reached a conservative point that could be described as "over-reserved?" The panelists will discuss perceived loss reserve levels while addressing some of the following issues:
* Could the punitive effect of announcing adverse reserve development in prior years have led to overreactions?
* Does the inherent uncertainty of the loss reserving process, especially for long-tailed lines of business, lead senior actuaries and management to book conservative reserves?
* How much does a statement of reserve adequacy depend on the perceived price adequacy of casualty insurance for 2004 through 2007? And how quickly could the emergence of adverse cost trends (sharply increasing medical inflation, for instance) change current reserve perceptions?
* What are the implications for shareholders and other stakeholders? Are conservatively stated reserves universally "good?"
The session will include a question-and-answer period with the audience.
Source: 2007 Annual Meeting
Type: concurrent
Moderators: Thomas Daley
Panelists: John Kollar, William Wilt, Stephan Christiansen
Keywords: P/C Insurance Loss Reserves

Federal Insurance Regulation

Some believe that the state system of insurance regulation is working quite well. When it comes to consumer protection, most would agree that state regulators are more effective than federal agencies in resolving individual consumer complaints. The insurance industry in general prefers less regulation to more, and there is certainly reason to question whether a federal bureaucracy can modernize insurance regulation without adding another layer of bureaucracy and the associated frictional costs. Still, there are many reasons to consider federal regulation of insurance, primarily associated with efficiency. Some insurers, particularly life insurance companies and reinsurers, suggest that consumers (and insurers) could benefit from substantial savings in regulatory-related expenses. There is general appeal for a single license and single approval of insurance products that would allow an approved product to be sold everywhere in the U.S., as well as a single regulator responsible for financial solvency and market conduct. There are ongoing discussions in Congress about a possible repeal of the 62-year-old McCarran-Ferguson Act. Additionally, legislators are considering the National Insurance Act of 2007, which would create an optional federal charter as a single national regulatory framework. The NAIC already provides quasi-regulation at the federal level, bringing a large degree of cooperation and uniformity to the state-based insurance regulatory system. But the NAIC has its limits, with no authority to impose its rules on the states. Would the insurance industry and consumers benefit from a federal insurance commissioner who does essentially what the NAIC has been doing, but with the authority to back it up? How about a less invasive federal law that allows insurers, particularly those with significant interstate business, the option of being governed by federal regulations?
Our panel will discuss the potential pros and cons associated with federal insurance regulation, and will ask attendees to voice their own concerns.
Source: 2007 Annual Meeting
Type: general
Moderators: Thomas Daley
Panelists: Michael McRaith, Deirdre Manna, Neil Alldredge, J. McKechnie

Behavioral Economics and the Insurance Market

Behavioral economics is a combination of psychology and economics that studies the impact of human limitations on market behavior. In order to observe this behavior in the capital markets, scientists employ what is known as market microstructure analysis. Today's computing power and detailed investment tracking databases have allowed us to "slow down the movie" and actually look at the moving atomic parts that result in trades. Behavioral economics comes into play because trades occur between two counterparties, be they institutions or individuals. Presumably these trades occur because both parties believe the trade to be beneficial. "Beneficial" is a purely subjective assessment based on individual interpretation of past, present, and future information, and risk preferences and attitudes. Trading also occurs in an auction framework and involves negotiations, patience, and alternatives. The structure of the auction itself can influence the outcome. This session will provide an introduction to capital market microstructure analysis, and apply a similar analysis to the insurance markets.
Source: 2007 Annual Meeting
Type: concurrent
Moderators: Thomas Daley
Panelists: Donald Mango, Richard Goldfarb
Keywords: Behavioral Economics

The Revised U.S. Qualification Standards

This presentation is offered to help actuaries prepare for the implementation of the new version of the U.S. Qualification Standards, which will take effect January 1, 2008, and will affect most practicing actuaries in the United States. Changes that have been made in the Qualification Standards and details on how to meet the revised requirements are among the topics that will be addressed. Ample time will be allotted for questions from the audience.
Source: 2007 Annual Meeting
Type: general
Panelists: Mary Miller

What to Do When You Can't Use Credit

Credit-based insurance scores have become an important part of many insurers' rating and underwriting plans, and can result in significant rate impacts. However, given the regulatory and public scrutiny of the use of credit information, there is concern about what would happen if insurers could no longer use credit. This is already the case in several jurisdictions, and insurers are coping with that reality. This session will discuss some of the ways insurers have dealt with not being able to use insurance score information, and will also offer additional suggestions for how insurers might address this situation.
Source: 2007 Fall SIS - Predictive Modeling
Type: concurrent
Panelists: Roosevelt Mosley, John Wilson

Visualizing Predictive Modeling Results

The proper graphical presentation of a predictive model can be a critical diagnostic tool in the analysis of model results. Graphical presentation is also a key communication tool to individuals in an organization who do not have a detailed background in predictive modeling. In this session, the presenters will draw on their experience from a variety of predictive modeling projects in order to demonstrate a number of graphical presentation methodologies that they have found critical in proper diagnosis and presentation of model results. This will include techniques to understand key aspects of the data, identify and analyze predictor variables, and summarize key model results to senior management. Selected elements of the presentation will be in case study format.
Source: 2007 Fall SIS - Predictive Modeling
Type: concurrent
Panelists: Charles Boucek, Louis Mak
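
As a minimal sketch of one common diagnostic graphic of the kind described above, the following buckets risks by predicted loss cost and compares average actual to average predicted in each bucket (a lift chart). The data and the `lift_table` helper are hypothetical, purely for illustration:

```python
# Sketch of a lift-chart table: sort by prediction, bucket into
# quantiles, compare average predicted vs. actual in each bucket.
def lift_table(predicted, actual, n_buckets=5):
    pairs = sorted(zip(predicted, actual))        # sort by predicted
    size = len(pairs) // n_buckets
    table = []
    for b in range(n_buckets):
        chunk = pairs[b * size:(b + 1) * size]
        avg_pred = sum(p for p, _ in chunk) / len(chunk)
        avg_act = sum(a for _, a in chunk) / len(chunk)
        table.append((avg_pred, avg_act))
    return table

# toy predicted/actual loss costs for ten risks
predicted = [100, 200, 300, 400, 500, 150, 250, 350, 450, 550]
actual    = [ 90, 210, 280, 420, 480, 160, 240, 370, 430, 560]
table = lift_table(predicted, actual, n_buckets=5)
```

Plotting the two columns of `table` side by side is the usual way to show senior management whether the model segments risks as intended.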

Use of Scoring in Marketing

Customer life cycles offer many opportunities for improving profitability: prospecting, lead generation, risk segmentation and selection, acquisition, retention, cross-sell, upsell, attrition, and win-back. But it is not just the contractual relationship; there is also price elasticity and brand value to consider over multiple policy periods of the customer "lifetime." We will discuss how to define a customer, and then how predictive models and operational implementation can improve your company's profitability, both immediately and in the future. In addition, the marketing department of an insurance organization seeks alignment with the underwriting department on which attributes are associated with a "good" risk when pursuing new business. External predictors used in a company's underwriting/pricing models can be leveraged to achieve better alignment. Potential new accounts can be scored and ranked based on the likelihood of passing through a company's underwriting filter. The accounts can also be compiled to reflect the relevant distribution channel. The distribution channel factors considered include proximity, type of account, and referral information related to the targeted account.
Source: 2007 Fall SIS - Predictive Modeling
Type: concurrent
Panelists: Martin Ellingsworth, Gary Ciardiello

Tools for Model Development, Performance Metrics, and Business Intelligence

In Who Says Elephants Can't Dance?, author Louis Gerstner Jr., the former CEO of IBM, recounts how IBM improved its decision-making capabilities, reacted more quickly to business drivers, and ultimately brought about one of the greatest corporate turnarounds in history. The insurance industry can learn from IBM by shifting from reactive to proactive management and using advanced analytics to address the business drivers that affect an insurer's bottom-line performance. These techniques can help a company make critical decisions sooner, better align strategic goals with quantitative metrics, and improve the organizational flow of information, and thus better meet market needs, customer service expectations, and competitive pressures. Predictive modeling is a major investment that requires ongoing monitoring, measurement of business impact, and adjustments to maximize ROI. This session offers two different perspectives on the crucial issue of deploying a predictive model and addresses performance metrics and proximity modeling.
Source: 2007 Fall SIS - Predictive Modeling
Type: concurrent
Moderators: Thomas Daley
Panelists: Charles Boucek, Mo Masud, Lisa Wester

Territorial Analysis: Putting Your Company on the Map

Geographic location is one of the most widespread and established rating factors in the insurer's rating algorithm, yet it is one of the more challenging factors to define. Historically, geographical risk classifications such as fire protection classes, rating territories, and zones have been defined based on physical surveys, engineering studies, and data analysis. Only the large companies had been able to deviate from the standard industry classifications. Traditionally, rate factors for these risk classifications were determined using one-way analysis techniques, even though geographical risk tends to be highly correlated with other risk factors. Because of this correlation, it is imperative that locational rating factors be analyzed within the context of a multivariate framework. As location tends to be made up of a large number of dimensions, many of which have sparse data, special multivariate techniques are required. This panel will discuss some techniques for determining definitions and rating factors based on the location of the risk and historical data. The discussion will include pros and cons of various options and diagnostics that can be used to validate the results.
Source: 2007 Fall SIS - Predictive Modeling
Type: concurrent
Panelists: Klayton Southwood, Serhat Guven, Christopher Carlson

Project Management for Predictive Models

The use of predictive modeling tools continues to expand within the property/casualty industry. The need to manage both the development and implementation of these complex tools is critical to accomplishing the goals for the business unit. This session will discuss the aspects of managing a complex project and the issues to consider for successful implementation.
Source: 2007 Fall SIS - Predictive Modeling
Type: concurrent
Panelists: Beth Fitzgerald, Jonathan White

Pricing Optimization II

Are you doing a good job of aligning your business strategies with your predictive modeling initiatives, and are you fully leveraging the power of predictive modeling across multiple business segments? This session will introduce enterprise-wide predictive modeling and discuss ways insurance companies can implement an analytical framework that enables predictive modeling-based business decisions across three major operational areas: pricing, marketing/retention, and underwriting. Enterprise-wide predictive modeling also incorporates a 360° feedback approach to implementing, measuring, and reassessing business decisions, resulting in a scalable approach that can meet an insurance company's short-term and long-term business objectives. The session will also discuss how insurance companies can enhance decision making across these three operational areas and react quickly to market forces by implementing an enterprise-wide predictive modeling framework. In today's climate, pricing strategy and competitive forces are widely recognized as the most important challenges facing personal lines insurers seeking to grow or defend their market shares without sacrificing profitability. Price optimization approaches, which balance the trade-off between profit and sales volume based on customer behavior and the competitive environment, have been successfully implemented in other service-led industries and are now being used by the financial services industry as a means of long-term value creation. This session will look at some of the practical steps insurance companies can take to develop market and customer behavior models and to build this knowledge into their pricing strategies.
Source: 2007 Fall SIS - Predictive Modeling
Type: concurrent
Panelists: Mo Masud, Jun Yan, Mark Airey
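
The profit-versus-retention trade-off at the heart of price optimization can be sketched in a few lines. All parameters below (the loss cost, the logistic retention curve, the reference price) are hypothetical assumptions for illustration, not from the session:

```python
import math

COST = 500.0  # expected loss cost per policy (hypothetical)

def retention(price):
    # renewal probability falls as price rises past a reference point
    return 1.0 / (1.0 + math.exp((price - 600.0) / 50.0))

def expected_profit(price):
    # margin per retained policy, weighted by the chance of retaining it
    return (price - COST) * retention(price)

# grid-search the price that balances margin against lost renewals
best = max(range(500, 801), key=expected_profit)
```

In practice the retention curve would itself be a predictive model fit to renewal data, and the optimization would respect regulatory and portfolio constraints.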

Predictive Modeling Lifecycle

Modelers frequently focus on the modeling step, but there are many other important steps in a predictive modeling project. This session will discuss a framework for the successful development and implementation of a predictive model, describing each of the steps necessary for the completion of a model:
* Business problem understanding
* Initial data understanding
* Data preparation for modeling
* Modeling
* Evaluation of the model
* Deployment of the model
The modeler is involved in each of these steps, but to create a successful project, other business functions, including business management, underwriting, claims, field operations, marketing, and IT, need to be involved as well.
Source: 2007 Fall SIS - Predictive Modeling
Type: concurrent
Panelists: Gary Ciardiello, Martha Winslow
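
The lifecycle steps above can be sketched as a minimal pipeline skeleton. Every function name and data field here is a hypothetical placeholder, not an actual framework:

```python
def understand_business_problem():
    # step 1: pin down the target and the intended business use
    return {"target": "claim_frequency", "use": "pricing"}

def prepare_data(raw):
    # steps 2-3: e.g., drop records with no exposure
    return [r for r in raw if r["exposure"] > 0]

def fit_model(data):
    # step 4: stand-in for the real modeling step (e.g., a GLM fit)
    return {"frequency": sum(r["claims"] for r in data) /
                         sum(r["exposure"] for r in data)}

def evaluate(model):
    # step 5: sanity check before deployment
    return model["frequency"] > 0

raw = [{"claims": 3, "exposure": 10.0}, {"claims": 0, "exposure": 0.0}]
spec = understand_business_problem()
data = prepare_data(raw)
model = fit_model(data)
ok = evaluate(model)
```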

Practical Issues in Model Design

Textbook descriptions of model development are normally presented with rather straightforward examples. However, applying predictive modeling to insurance data presents a range of particular challenges. This presentation will discuss some frequently encountered issues in developing a predictive model with insurance data, including:
1. A high proportion of missing data. Options for diagnosing the impact and adjusting a model when missing data will affect the model parameters will be presented.
2. A nonlinear pattern in a predictor variable when applying a GLM. Options for addressing this will be presented, including incorporating a spline in the application of a predictor variable.
3. Interactions, which are often used to allow for dependencies between predictor variables. Different options for incorporating such dependencies across discrete variables, polynomials, and splines will be discussed.
4. Data that cannot be extracted in the format best suited to modeling. One example is when a series of claim payments cannot be linked to a single claim event. We will show how the over-dispersed Poisson can be used in this case.
Source: 2007 Fall SIS - Predictive Modeling
Type: concurrent
Moderators: Sarah McNair-Grove
Panelists: Claudine Modlin, Charles Boucek
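
One way a spline handles a nonlinear predictor is via a piecewise-linear (hinge) basis: each knot contributes a term max(0, x - knot), which the GLM then fits linearly. The knots and data below are hypothetical, purely illustrative:

```python
def hinge_basis(x, knots):
    """Expand one predictor into [x, max(0, x-k1), max(0, x-k2), ...]."""
    return [x] + [max(0.0, x - k) for k in knots]

knots = [25.0, 60.0]          # e.g., hypothetical driver-age breakpoints
row = hinge_basis(40.0, knots)
```

The fitted coefficients on the hinge terms let the slope change at each knot, so a single predictor can bend where the data says it should.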

Overview of R

R is a freely available open source computer language designed for statistical computing. This session will start with an introduction to R, including where to get it and how to use it. Next, it will illustrate some actuarial applications of R in the areas of predictive modeling for insurance pricing and reserving.
Source: 2007 Fall SIS - Predictive Modeling
Type: concurrent
Panelists: Glenn Meyers, James Guszcza

How to Use Predictive Modeling to Investigate Claims

For centuries, mathematical models have been constructed to represent quantitative relationships among variables, including predicting outcomes given the presence of input variables. This session will cover the main ideas in some recently developed claim modeling approaches and discuss references for the technical details that are publicly available. The context will be predictive modeling for the decision to investigate claims for excessive medical treatment and fraud. Both supervised and unsupervised methods will be covered. Techniques will include:
* Fuzzy logic and controllers
* Regression-based scoring systems
* The PRIDIT technique of clustering and scoring
* The EM technique for filling in missing data and profiling medical billing patterns
* Tree-based methods, including CART, TREENET, and Random Forest
Examples from real auto claim data will be discussed. Practical problems in implementing such techniques will be covered, as well as the various applications of predictive modeling to the claims function. Emphasis will be placed on the personal and commercial lines of insurance. Applications to be presented include estimating claim settlement values, estimating the impact of law changes on claim values, identifying potentially fraudulent claims, and managing the claims process. In addition, an overview of the insurance claims fraud problem will emphasize claims processing and fraudulent and abusive claims detection.
Source: 2007 Fall SIS - Predictive Modeling
Type: concurrent
Panelists: Roosevelt Mosley, Richard Derrig

Homeowners Insurance Scores/Disability Pricing and Dental Fraud Detection: Supervised and Unsupervised Learning

For some time now, scores using predictive modeling techniques have been used for evaluating credit and assigning tiers. But what other types of scores might be developed for use in rating a risk? In the first part of this session we will examine some possibilities for the homeowners line of insurance. The second half of the session will examine disability pricing and dental fraud detection, two very different data mining applications. Disability pricing is an example of supervised learning, meaning that for each exposure in the historic dataset the actual claim cost is known. In contrast, dental fraud detection is an example of unsupervised learning, meaning that it is unknown which claims in the historic dataset were fraudulent. The rate structure for disability insurance needs to be easily understood by management, regulators, the sales force, and customers. This session will demonstrate how complex predictive modeling techniques can be applied to isolate the impact of various claim drivers on expected claim costs. The dental insurance business is characterized by high claim volumes and low claim amounts. Fraud is suspected to be prevalent, but the low claim amounts make manual investigation relatively expensive. Mining the claim data would seem to be a good approach; however, it is not known which historic claims are fraudulent and which aren't. This session will detail how unsupervised learning can be an effective approach to identifying suspicious claims.
Source: 2007 Fall SIS - Predictive Modeling
Type: concurrent
Panelists: Jeffrey Kucera, Jonathan Polon

Homeowners

Many GLM ratemaking applications have focused on private passenger auto examples. This session will discuss how the nature of some homeowners' variables affects a predictive modeling analysis. These include both traditional rating variables (such as amount of insurance, deductible, and policy form) as well as external variables related to demographics or weather. The typical indivisible premium approach for analyzing homeowners' data does not lend itself well to proper investigation of these explanatory variables; therefore, the presentation will outline a case for modeling homeowners separately by peril. The panel will also survey the myriad ways various companies have incorporated this information into their rating plans, and discuss the advantages and disadvantages of various approaches.
Source: 2007 Fall SIS - Predictive Modeling
Type: concurrent
Panelists: Gaetan Veilleux

GLM Offset/Generalized Iteration Algorithms

The first part of the session will focus on the offset feature of GLMs, which gives the modeler control over the weight selected terms have in a model. The panelists will discuss various ways the offset feature can be used creatively to meet actuarial challenges in personal and commercial lines modeling projects. The second part of the session will present a flexible and comprehensive iteration algorithm, the generalized iteration algorithm (GIA), for fitting GLMs. The algorithm is a generalization of the minimum bias approach, and it will be demonstrated that GIA can solve commonly used GLM models. GIA has three main advantages. First, it can solve not only the commonly used GLM models but also a broader range of GLM models, giving actuaries more options for fitting their data. Second, the algorithm can easily be modified to solve mixed additive and multiplicative models, as well as the constrained-optimization problems (parameters that need to be capped) that actuaries often face in practice. Lastly, the algorithm is easy to understand and implement, and can be programmed in almost any software. A live demonstration of the algorithm, implemented in Excel and Visual Basic, will be given.
Source: 2007 Fall SIS - Predictive Modeling
Type: concurrent
Moderators: Sarah McNair-Grove
Panelists: Cheng-Sheng Wu, Luyang Fu, Jun Yan, Matt Flynn
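
Since GIA generalizes the minimum bias approach, a classic Bailey-style minimum bias iteration for a multiplicative two-factor model gives a flavor of the underlying idea. This sketch uses toy, exactly multiplicative data and illustrates minimum bias generally, not GIA itself:

```python
def minimum_bias(r, w, n_iter=100):
    """Fit r[i][j] ~ x[i] * y[j] via the balance principle:
    alternately solve for each factor holding the other fixed."""
    rows, cols = len(r), len(r[0])
    x, y = [1.0] * rows, [1.0] * cols
    for _ in range(n_iter):
        for i in range(rows):
            x[i] = (sum(w[i][j] * r[i][j] for j in range(cols)) /
                    sum(w[i][j] * y[j] for j in range(cols)))
        for j in range(cols):
            y[j] = (sum(w[i][j] * r[i][j] for i in range(rows)) /
                    sum(w[i][j] * x[i] for i in range(rows)))
    return x, y

# toy pure premiums by class (rows) and territory (columns),
# with unit exposure weights
r = [[100.0, 200.0],
     [150.0, 300.0]]
w = [[1.0, 1.0],
     [1.0, 1.0]]
x, y = minimum_bias(r, w)
fitted = [[x[i] * y[j] for j in range(2)] for i in range(2)]
```

Because the toy data is exactly multiplicative, the iteration reproduces it; on real data the fitted relativities balance actual and expected losses within each factor level.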

GLM I: Introduction to Generalized Linear Models

Do terms such as “link function,” “exponential family,” and “deviance” intimidate you? If so, this session will help demystify generalized linear models and will provide a basic introduction to linear models and GLMs. It is targeted at those who have modest experience with statistics or modeling. The session will start with a brief review of traditional linear models, particularly regression, which has been taught and widely applied for decades. Session leaders will explain how GLMs naturally arise as some of the restrictive assumptions of linear regression are relaxed. GLMs can model a wide range of phenomena, including frequencies, severities, and loss ratios, as well as the probability that a customer will renew a policy, to name just a few. The session will emphasize intuition and insight rather than mathematical calculations, which are handled by software these days, anyway!
Source: 2007 Fall SIS - Predictive Modeling
Type: concurrent
Panelists: Louise Francis, Curtis Dean
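
A tiny worked example of the “link function” idea: with a log link, the additive linear predictor becomes a product of rating factors on the response scale. The baseline and relativities below are hypothetical numbers, not from the session:

```python
import math

base = math.log(100.0)          # intercept: baseline pure premium of 100
young_driver = math.log(1.25)   # +25% relativity, expressed on the log scale
urban = math.log(1.10)          # +10% relativity

# adding on the log scale multiplies on the response scale
linear_predictor = base + young_driver + urban
expected_loss = math.exp(linear_predictor)   # 100 * 1.25 * 1.10 = 137.5
```

This is why a log-link GLM produces the familiar multiplicative rating plan.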

Frequency and Severity Modeling

Most predictive modelers and software packages use the Poisson and Gamma distributions for frequency and severity modeling. During this session, we will consider alternative distributions to the Poisson and Gamma and describe how these choices can affect parameter estimates. We will illustrate other useful conditional distributions for severity modeling, including one distribution that is essentially absent from the actuarial literature. Alternative modeling strategies, such as modeling the pure premium directly, will be covered. We will discuss the impact of data collection and coverage (clustering of loss values, limit, and deductible) on the model, and how to adapt the model for these situations.
Source: 2007 Fall SIS - Predictive Modeling
Type: concurrent
Panelists: Christopher Monsour, Robert Sanche
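
As a toy illustration of the frequency/severity decomposition the session builds on, the sketch below simulates Poisson claim counts and Gamma claim sizes (all parameters hypothetical) and checks that the simulated pure premium lands near E[N] · E[X]:

```python
import math
import random

random.seed(42)

def poisson(lam):
    """Knuth's algorithm for a Poisson draw (adequate for small lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

FREQ = 0.1                   # expected claims per policy (hypothetical)
ALPHA, THETA = 2.0, 2500.0   # Gamma severity: mean = ALPHA * THETA = 5000

n_policies = 10_000
total = 0.0
for _ in range(n_policies):
    for _ in range(poisson(FREQ)):        # frequency draw
        total += random.gammavariate(ALPHA, THETA)  # severity draw

pure_premium = total / n_policies   # should land near FREQ * 5000 = 500
```

Swapping in other conditional distributions, one theme of the session, changes the tails and the parameter estimates but not this basic decomposition.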

Free and Cheap Data Sources

It is widely believed that the usefulness of predictive models can be increased by incorporating external sources of data, along with company-specific data, into the project database. This session will feature a number of short presentations by users of external data. The data featured in this session is either free or costs no more than $200 to acquire. The presenters will cover geographic, demographic, economic, and survey data. Attendees will learn how to find the data and will be shown examples of applications of the data.
Source: 2007 Fall SIS - Predictive Modeling
Type: concurrent
Moderators: Christopher Hurst
Panelists: Louise Francis, Aleksey Popelyukhin, Christopher Monsour

Estimating Personal Auto Loss Costs that Vary by Address/Household Averaging

Postal zip codes form the basic unit for many territorial ratemaking methodologies in use today, yet driving conditions such as chronic traffic congestion, population density, weather, and the physical environment are not constrained by zip code boundaries. The first part of this session describes how to estimate personal auto loss costs as a function of over 1,200 variables that describe local driving conditions. The method first applies variable reduction techniques, such as principal components analysis and structural equation models, to significantly reduce the number of variables, and then fits separate frequency and severity models on the reduced variable set to produce loss cost estimates. The session will also describe how to analyze a holdout sample and measure the effectiveness of this methodology. Historically, operators in personal auto have been assigned to vehicles using outdated underwriting standards. The challenge has always been reflecting other drivers in the household and their potential use of the insured vehicle. Over the past several years, insurers have been using predictive modeling techniques to better incorporate information about the additional operators onto the vehicle. The second part of the session will discuss several strategies for addressing this challenge within a predictive modeling framework.
Source: 2007 Fall SIS - Predictive Modeling
Type: concurrent
Panelists: Glenn Meyers, Serhat Guven