Search Presentations

The presentation materials are offered in connection with CAS professional education offerings. © 2022 Casualty Actuarial Society. All Rights Reserved. The presentation materials may contain copyrighted content the use of which has not been specifically authorized by the copyright owner. You are permitted to view and print the materials for personal/professional noncommercial research purposes. Except for the foregoing, you agree not to reproduce, distribute, modify, create derivative works, or commercially exploit the presentation materials without prior written permission from CAS. Please direct any copyright permission inquiries regarding use of the presentation materials to acs@casact.org.

STAY TUNED! If you are anticipating additional search filters by attribute and level to align with the CAS Capability Model, these are coming later this summer. As the CAS begins to code recorded sessions by specific attributes and levels (starting with the 2023 Annual Meeting), sessions will be tagged in the CAS database of presentations going forward and should be searchable.

But you may use the Capability Model now to help you identify topics. For example, if you want to move up one level under the content area “Functional Expertise,” you may search topics in the particular functional area to expand your knowledge.

Recorded content is searchable by Capability Model attribute and level in the CAS Online Library.

Understanding Capital and When You Really Need It - Lessons Learned or Not Learned From Subprime

The subprime crisis negatively affected many different industries, including the insurance industry. Companies that had a better understanding of the risks to their capital were better able to weather the storm. What have we learned from this experience, and are there other, similar problems lurking in the industry, i.e., what other potential areas may adversely affect companies? And what steps, if any, should we be taking?
Source: 2008 Annual Meeting
Type: general
Moderators: Alexander Shipilov
Panelists: Michael Schmitz, Dave Ingram, Don Mango

Trends and Issues in the D&O and E&O Marketplace

Every time a business upheaval occurs, there are subsequent impacts on the D&O and E&O markets. First it was Enron, Tyco, and other corporate failures, with the resulting Sarbanes-Oxley legislation; now it is the subprime mess, the resulting credit crunch, and the subsequent financial market upheaval. Learn about the most recent trends in the E&O and D&O marketplace, as well as the many challenges faced in reserving for these products.
Source: 2008 Casualty Loss Reserve Seminar (CLRS)
Type: concurrent
Moderators: Alexander Shipilov
Panelists: Michael McManus, Athula Alwis, Joel Townsend
Keywords: D&O and E&O

Risk-Based Financial Management of Insurance Companies

Recent publications by some multinationals evidence an increasing use of a risk-based framework for the financial management of insurance companies. Long a staple for the management of financial assets, the movement toward risk-based management of insurance liabilities -- which began on the life side in Europe -- is expanding to P&C and life companies, both inside and outside the U.S. Drivers of this movement include increasing rating agency focus on ERM, the European CFO Forum, Solvency II and like initiatives in Europe, the emergence of fair-value accounting, and evolving international accounting standards for insurance contracts. This panel will highlight key concepts of a risk-based framework relative to a Statutory or GAAP framework, compare risk-based performance metrics with more traditional metrics, and discuss some of the challenges in applying risk-based concepts in actual practice.
Source: 2008 Casualty Loss Reserve Seminar (CLRS)
Type: concurrent
Moderators: Paul Horgan
Panelists: Thomas McIntyre, Dominique Lebel
Keywords: Financial Management, Risk-Based

COPLFR Issue Brief on Reserve Variability

The recently published COPLFR Issue Brief on Reserve Variability describes how uncertainty in the value of unpaid claim liabilities in property/casualty insurance has significant impacts on financial reporting for insurance enterprises, directly affecting earnings and book value and ultimately impacting rating agency views and management's business decisions. This uncertainty often leads actuaries to calculate a range of unpaid claim estimates, commonly referred to as a "reserve range," rather than just a single estimate, as a way of dealing with and/or communicating this uncertainty. In this session we will describe why a range or distribution may be used and, in particular, the use of a "reasonable range", describe the types of ranges and distributions that exist and why and how such unpaid claim estimates are developed, and discuss the characteristics that define transparent and understandable disclosures of unpaid claim estimates.
Source: 2008 Casualty Loss Reserve Seminar (CLRS)
Type: concurrent
Moderators: Paul Horgan
Panelists: Marc Oberholtzer, Kristi Carpine-Taber
Keywords: Reserve Variability, COPLFR

A Follow-up Session on Actuarial Standards of Practice 43

The ASB adopted the property/casualty unpaid claim estimates standard (ASOP 43), effective September 1, 2007. The panelists will discuss the main elements of the standard and present examples in the form of case studies to elicit discussion.
Source: 2008 Casualty Loss Reserve Seminar (CLRS)
Type: concurrent
Moderators: Paul Horgan
Panelists: R. Muth, Jason Russ
Keywords: Actuarial Standards of Practice 43

Integrating Reserve Risk Models into Economic Capital Models

For property/casualty insurers, reserve risk makes up a significant portion of the overall risk of the enterprise. As economic capital (EC) models evolve and become more sophisticated, it is natural to expect that reserve risk models will drive some of that evolution. The presenters will offer an overall critique of existing methods and present newly developed approaches that address most of the weaknesses observed in some of those methods. The panelists will also explain how the results of their reserve risk models have been integrated into EC models.
Source: 2008 Casualty Loss Reserve Seminar (CLRS)
Type: concurrent
Moderators: Paul Horgan
Panelists: Francois Morin, Stuart White
Keywords: Reserve Risk Models, Economic Capital Models

Reserving from a Reinsurer's Perspective

While reinsurance reserving principles are generally similar to primary reserving, applying them is often more difficult and involves special considerations. This panel will discuss some of the considerations, issues, and challenges faced by actuaries performing reinsurance reserving. These include data segmentation, changes in terms and conditions, industry benchmarks, reserve ranges, and integrating internal company information such as claims data, underwriting considerations, accounting and pricing.
Source: 2008 Casualty Loss Reserve Seminar (CLRS)
Type: concurrent
Moderators: Jorge Montepeque
Panelists: Christopher Bozman, Jon Levy
Keywords: Reinsurers, Reserving

II - Investigating and Detecting Change

This session will explore a variety of techniques to detect and address changes in mix of business, claim closing patterns, and case reserve adequacy. When changes in history are verified through discussion with claim, underwriting, reinsurance, and field staff, the actuary can pick the right tool for the job. Adjustments of loss reserve methodologies to account for each situation will also be discussed.
Source: 2008 Casualty Loss Reserve Seminar (CLRS)
Type: concurrent

Fundamentals of Effective Project Management

Gain a practical approach to managing the resources, people, deadlines and real-world challenges required to bring any project in on time, on target and on budget.
Source: 2008 Casualty Loss Reserve Seminar (CLRS)
Type: concurrent
Moderators: Jorge Montepeque
Panelists: Kai Rambow
Keywords: Project Management

Paper Session

"Clustering in Ratemaking: Application in Territories Clustering" by Ji Yao Survey of state-of-the-art clustering methods and their applications in insurance ratemaking are presented in the first part of session. The reason for clustering and the consideration in choosing clustering methods in insurance ratemaking are discussed. Then six types of clustering methods are reviewed and particularly the problem of applying these methods directly in insurance ratemaking is discussed. To alleviate some of these problems, an exposure-adjusted hybrid (EAH) clustering method is proposed in the second part of session. The proposed method used along with Generalized Linear Model provides a practice solution to ratemaking of rating factors that has many levels, such as territory. The rational of the method is discussed and this method is illustrated step by step using the U.K. motor data. The limitations and other considerations of clustering are followed in the end. "Territory Analysis with Mixed Models and Clustering" by Eric J. Weibel and J. Paul Walsh Territory as it is currently implemented is not a causal rating variable. The actual causal forces that drive the geographical loss generating process (LGP) do so in a complicated manner. Both the loss cost gradient (LCG) and information density (largely driven by the geographical density of exposures and by loss frequency) can change rapidly, and at different rates and in different directions. This makes the creation of credible homogenous territories difficult. Auxiliary information that reflects the causal forces at work on the geographical LGP can provide useful information to the practitioner. Furthermore, since the conditions that drive the geographical LGP tend to be similar in proximity, the use of information from proximate geographical units can be helpful. However, to date procedures for incorporating auxiliary information involve the subjective consideration of conditions. 
And the use of proximate experience as a complement is complicated by complex patterns taken on by the LCG in relation to information density. Spline and graduation methods implicitly incorporate this information, but they tend to be applied ad-hoc to different regions. Incorporating a complement of credibility via proximate geographical units is only discussed formally in two papers, and is fairly undeveloped as a method. Another problem involves determining the relative value of information obtained via proximity versus the information provided by auxiliary variables. Separately, the implementation of territory as a categorical variable has prevented the integration of Territory Analysis with the parameterization of the remainder of the classification plan. In addition to these actuarial problems, territory's lack of causality creates acceptability problems. Lack of causality and increasingly complex territorial definitions have also reduced jurisdictional loss control incentives. The newly promulgated Proposition 103 regulations in California provide a useful venue for investigating solutions to these problems.
Source: 2008 Fall SIS- Predictive Modeling
Type: Paper
Moderators: Jorge Montepeque
Panelists: Eric Weibel, Ji Yao, Paul Walsh
Keywords: Clustering in Ratemaking
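As a generic illustration of the kind of grouping the clustering paper describes (this is plain exposure-weighted k-means on one dimension, not the paper's EAH method; all data and names are invented):

```python
# Hypothetical sketch: exposure-weighted 1-D k-means for grouping
# territories by loss cost. NOT the EAH algorithm from the paper.

def weighted_kmeans_1d(values, weights, k, iters=50):
    """Cluster 1-D loss costs, weighting each territory by exposure."""
    # Initialize centroids evenly across the observed range.
    lo, hi = min(values), max(values)
    centroids = [lo + (hi - lo) * (i + 0.5) / k for i in range(k)]
    assign = [0] * len(values)
    for _ in range(iters):
        # Assignment step: each territory joins its nearest centroid.
        assign = [min(range(k), key=lambda j: abs(v - centroids[j]))
                  for v in values]
        # Update step: exposure-weighted mean of each cluster.
        for j in range(k):
            wsum = sum(w for a, w in zip(assign, weights) if a == j)
            if wsum > 0:
                centroids[j] = sum(v * w for a, v, w in
                                   zip(assign, values, weights)
                                   if a == j) / wsum
    return assign, centroids

# Toy data: territory loss costs and earned exposures.
loss_costs = [100, 105, 110, 240, 250, 260]
exposures = [10, 50, 40, 5, 20, 30]
groups, centers = weighted_kmeans_1d(loss_costs, exposures, k=2)
```

Weighting by exposure keeps thinly populated territories from pulling a group's indicated loss cost around, which is one motivation the paper gives for adjusting standard clustering before use in ratemaking.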

Predictive Analytics for Detecting Suspicious Claims

Predictive analytics is the key to effective business process management in claims operations. One longstanding use of analytics is to sort complex event processes into segments for differentiated attention by claims department staff, their colleagues, and possibly even third-party vendors. This session will take a detailed look at the data used in this process along with external information, and then show how to combine data sources to create a business alerting and management framework for taking action on suspicious claims, from an individual claimant point of view as well as by looking at collusive networks of organized fraud.
Source: 2008 Fall SIS- Predictive Modeling
Type: concurrent
Moderators: Jorge Montepeque
Panelists: Richard Derrig, Molly Bhattacharyya

Price Optimization - Theory and Practice

Recent years have seen an increase in the sophistication of predictive modeling of claims and, more recently, policyholder retention and conversion rates. Often, however, relatively manual techniques are then used to derive the actual rates to apply in practice, failing to leverage the full potential of the underlying analyses. This session will explore how sophisticated price optimization methods can be used to determine rates that best match an insurer's strategic profit and growth objectives. The session will:
* describe some of the technical aspects of price optimization
* outline some potential pitfalls to avoid
* discuss real-world implementation challenges and solutions
* discuss case studies of how such methods can improve performance in practice
Source: 2008 Fall SIS- Predictive Modeling
Type: concurrent
Moderators: Jorge Montepeque
Panelists: James Tanser

Hierarchical Modeling/Offset Practitioners Guide

Offsets are an important tool for the manager or analyst in many GLM modeling exercises. The most important questions about offsets are "When do you use one?", "How do you actually code it?", and "What are the snares to look out for?" This session will provide some dos and don'ts, as well as show some SAS code and actual results. Formulas and theory will be relegated to at most one slide, way in the back.

The multilevel/hierarchical (a.k.a. "mixed effects") modeling framework is a powerful and intuitive generalization of the linear modeling framework that has become a cornerstone of much actuarial work, yet it has received relatively scant attention in actuarial publications and seminars. Hierarchical models apply when one's data is naturally structured in groups (e.g., repeated observations for each policy, or policies within territories) and one would like one's model coefficients to reflect this group structure. This is achieved by allowing (some of) the model parameters to vary by group. This session will sketch some fundamental concepts of hierarchical models, discuss some of their many practical applications in actuarial science, and draw a connection between the theory of hierarchical models and Bayesian credibility theory.
Source: 2008 Fall SIS- Predictive Modeling
Type: concurrent
Panelists: Keith Holler, James Guszcza, Bill Stergiou
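To illustrate the offset idea the session covers (the session shows SAS; this is a Python sketch of the same concept, with invented coefficient names beta0 and beta1):

```python
import math

# Hedged sketch: how an offset enters a log-link Poisson claim-count
# model. With log(exposure) as the offset, the model fits a frequency
# and the fitted count scales automatically with exposure.

def expected_claims(beta0, beta1, x, exposure):
    """mu = exp(beta0 + beta1*x + log(exposure)) = exposure * frequency."""
    eta = beta0 + beta1 * x + math.log(exposure)
    return math.exp(eta)

# Doubling exposure doubles the expected claim count; the frequency
# component exp(beta0 + beta1*x) is untouched.
mu1 = expected_claims(-2.0, 0.3, x=1.0, exposure=1.0)
mu2 = expected_claims(-2.0, 0.3, x=1.0, exposure=2.0)
```

The key coding point is that the offset term carries a fixed coefficient of 1 and is never estimated, which is what distinguishes it from an ordinary predictor.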

Communicating Predictive Modeling Results

Predictive Modeling is highly technical work. The successful implementation of a predictive modeling project often relies on communicating project results to a less technical audience. Graphical presentation of results is thus a key communication tool for predictive modeling work. In this session, the presenters will draw on their experience from a variety of predictive modeling projects in order to demonstrate a number of graphical presentation methodologies that they have found critical for proper presentation of model results. This will include techniques to understand key aspects of the data, identify and analyze predictor variables and summarize key model results to senior management. Selected elements of the presentation will be in case study format.
Source: 2008 Fall SIS- Predictive Modeling
Type: concurrent
Moderators: Jorge Montepeque
Panelists: Louis Mak

Commercial Lines Predictive Modeling for Workers Compensation

Predictive modeling is proving to be as successful in commercial lines as it has been in personal lines. Many of the challenges that face commercial lines modelers, however, are different and must be addressed properly in order for the results to be reliable. This session will discuss the major challenges in workers compensation modeling, from data to deployment, and will offer approaches to address these challenges.
Source: 2008 Fall SIS- Predictive Modeling
Type: concurrent
Moderators: Jorge Montepeque
Panelists: Martin Ellingsworth, Philip Borba, Cheng-Sheng Wu, Karthik Balakrishnan

Use of GLM in Rate Filings

What makes a good rate filing when GLM or other predictive modeling methods are the basis for the analysis? How can you sufficiently demonstrate that the results of predictive modeling produce rates that are neither excessive, inadequate, nor unfairly discriminatory? Both the regulatory and the company view will be represented. A lively discussion should ensue!
Source: 2008 Fall SIS- Predictive Modeling
Type: concurrent
Moderators: Jorge Montepeque
Panelists: Kenneth Creighton

Software and Utilities: Free or Inexpensive

The open source revolution has made available a number of very useful software packages and utilities to data analysts. This session will feature free and inexpensive software that can augment the commercial tools used by predictive modelers. The session will describe where to obtain the software and will illustrate how to use it. Data used to illustrate the software and utilities will be made available to session participants. The software featured in this session will include:
* An R add-in that can be used to read data into the free software package R directly from Excel or Access. Note that one of R's inconveniences is that data must typically be converted into a text file before it can be used.
* DataFerrett, free software that can be used to download census and other data from U.S. federal government databases.
* ROOT, a free data analysis tool developed by CERN, the lab where the World Wide Web was developed. ROOT contains several tools that perform exploratory data analysis, fitting, and reporting.
Source: 2008 Fall SIS- Predictive Modeling
Type: concurrent
Moderators: Jorge Montepeque
Panelists: Matthew Flynn, Christopher Monsour, Ravi Kumar

Commercial Lines Predictive Modeling for Commercial Auto

The commercial automobile insurance product line offers a unique predictive modeling opportunity. On one hand, many of the factors and data sources developed for personal auto can be used in commercial auto. Credit scores, vehicle identification numbers (VINs), motor vehicle records (MVRs), and territory refinement all make the transition to commercial auto quite well. On the other hand, the unique characteristics of commercial auto risks offer some new opportunities. Factors such as industry classification, trailer type, and truck-to-car ratio all contribute to the dynamic world of commercial auto predictive modeling.
Source: 2008 Fall SIS- Predictive Modeling
Type: concurrent
Panelists: Robert Walling, David Otto

Project Management for Predictive Models

The use of predictive modeling tools continues to expand within the property/casualty insurance industry. The need to manage both the development and implementation of these complex tools is critical to accomplishing the goals for the business unit. This session will discuss the aspects of managing a complex project and the issues to consider for successful implementation.
Source: 2008 Fall SIS- Predictive Modeling
Type: concurrent
Panelists: Jonathan White, John Baldan

Survival Modeling and Its Application in Pricing/Demand Modeling

The first portion of this session will discuss survival modeling. Currently, insurance pricing predominantly focuses on accurate loss estimation. Next-generation pricing models need to focus on other critical elements that depend heavily on customer lifetime behavior: acquisition costs, retention, and price elasticity. Insurers that develop the most appropriate pricing models for all these components stand to have a significant competitive advantage. An important component of these models is estimated customer lifetime, which can best be studied using survival analysis. Survival modeling has advantages over techniques such as logistic and normal regression:
* It can handle censored data, i.e., the fact that not all customers attrite during the observation window.
* It gives complete information on both the occurrence (or not) and the timing of an event, unlike logistic regression, which addresses occurrence only.
* It takes into account dynamic variables, such as age and salary level, which change during the observation window.
Of course, these sophisticated models will provide only limited business value if they cannot be integrated into the core rating and pricing processes. Only by operationalizing the models can insurers consistently and proactively realize the benefits. As part of the development process, there should be a parallel track examining the system, regulatory, and operational issues that need to be considered so that the analytical results can be adapted to these business requirements and ultimately implemented. We recommend that insurers embrace the survival analysis techniques now available to gain an advantage in pricing, as well as in other areas of marketing, and strive to implement them rapidly in the systems that drive pricing.

The second portion of this session will discuss demand modeling. Predictive modeling has gained widespread acceptance within the North American insurance industry as a means to estimate loss costs. Unlike other industries that have applied the practice to understand customer response, the insurance industry has yet to widely use predictive modeling for this purpose. This session will cover the use of multivariate techniques to study and predict outcomes such as response rate, policyholder retention, and new business conversion. The panel will provide practical tips and illustrative results associated with modeling customer response data. Furthermore, the panel will address the benefits and applications of modeling customer response, in particular the issues and opportunities associated with combining loss-cost models and customer-response models to determine optimal prices.
Source: 2008 Fall SIS- Predictive Modeling
Type: concurrent
Moderators: Jorge Montepeque
Panelists: Serhat Guven, Ridesh Aggarwal, Arnab Dey
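The censoring point above can be made concrete with a minimal Kaplan-Meier estimator (a standard survival-analysis tool, not taken from the session materials; the tenure data is invented):

```python
# A minimal Kaplan-Meier estimator in plain Python, illustrating how
# survival analysis handles censored policy lifetimes: customers still
# on the books at the study cutoff contribute to the risk set without
# being counted as attrition events.

def kaplan_meier(times, observed):
    """Return [(time, S(t))] at each observed attrition time.
    observed[i] is True if customer i attrited at times[i],
    False if censored (still a customer at that point)."""
    event_times = sorted(set(t for t, o in zip(times, observed) if o))
    surv, curve = 1.0, []
    for t in event_times:
        at_risk = sum(1 for ti in times if ti >= t)
        events = sum(1 for ti, o in zip(times, observed) if ti == t and o)
        surv *= 1.0 - events / at_risk
        curve.append((t, surv))
    return curve

# Toy tenure data in years; False marks a customer censored at study end.
tenure = [1, 2, 2, 3, 4, 5]
attrited = [True, True, False, True, False, False]
curve = kaplan_meier(tenure, attrited)
```

Note how the censored customers shrink the risk set as time passes but never trigger a drop in S(t), which is exactly the behavior logistic regression on an attrited/not-attrited flag cannot reproduce.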

Data Mining Database Design

Eighty percent of predictive modeling effort is just dealing with data. The amount of data used in predictive modeling is doubling every two years, and commercial databases and IT are barely playing catch-up with this ever-growing demand. The keys to a successful predictive modeling project are good organization of, and efficient access to, data. In the first half of this session we will touch on themes such as data organization, scalability, reliability, administration, and distribution. In particular, we will discuss: the need for a centralized data store used just for predictive modeling; physical design versus logical design; the ability for each user to have her own workspace defining her own view of the data; the ability to accommodate unpredictable data access patterns; and the need for 24/7 availability.

Multivariate models often run the risk of overfitting because of the curse of dimensionality, so a model's predictive performance depends very much on selecting an optimal set of predictive variables that defies this curse. The second half of the session will present the following topics within the context of variable selection: considering business issues first; using fast algorithms to quickly get a baseline list; creating additional variables (features); handling nonlinear relationships with the target; handling correlation and interdependence between variables; taking care of outliers; introducing fake variables into the mix; the importance of using diverse statistical techniques (forward and backward selection, linear regression, binary trees, neural networks, and others); and the importance of bootstrapping and cross-validation in variable selection.
Source: 2008 Fall SIS- Predictive Modeling
Type: concurrent
Moderators: Jean Desantis
Panelists: Cheng-Sheng Wu, Ravi Kumar, Jeffrey White
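A flavor of the forward-selection idea mentioned above (this is a simplified stagewise sketch with invented toy data, not any specific procedure from the session):

```python
# Hedged sketch of greedy forward selection: repeatedly add the
# candidate variable most correlated with the current residual,
# then regress the residual on it and continue.

def corr(a, b):
    """Pearson correlation of two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb) if sa > 0 and sb > 0 else 0.0

def forward_select(X, y, k):
    """X: dict of name -> column. Greedily pick k variables."""
    residual, chosen = list(y), []
    for _ in range(k):
        # Rank unused candidates by |correlation| with the residual.
        name = max((n for n in X if n not in chosen),
                   key=lambda n: abs(corr(X[n], residual)))
        chosen.append(name)
        # Regress the residual on the chosen column; update residual.
        col = X[name]
        mc = sum(col) / len(col)
        mr = sum(residual) / len(residual)
        beta = (sum((c - mc) * (r - mr) for c, r in zip(col, residual))
                / sum((c - mc) ** 2 for c in col))
        residual = [r - mr - beta * (c - mc)
                    for c, r in zip(col, residual)]
    return chosen

# Toy data: y is roughly 2 * x1; x2 is weaker; noise is irrelevant.
X = {"x1": [1, 2, 3, 4, 5],
     "x2": [1, 1, 2, 2, 3],
     "noise": [3, 1, 4, 1, 5]}
y = [2.1, 4.0, 6.2, 8.1, 9.9]
picked = forward_select(X, y, k=1)
```

The session's caution applies directly here: with correlated candidates (x1 and x2 above), greedy selection can be unstable, which is why it recommends cross-validation and multiple techniques rather than trusting one pass.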

Personal Auto Predictive Modeling Update

The highly competitive personal automobile marketplace continues to be the laboratory for experimentation in the development of innovative rate classification and tiering rating plans. This session will cover the basics of using predictive modeling techniques to analyze a company's experience, including data concerns, the use of external data, practical considerations, and so on. New variables that have recently appeared in the marketplace will be discussed. Vehicle rating plans, including the use of predictive modeling based on vehicle characteristics, will also be discussed in detail.
Source: 2008 Fall SIS- Predictive Modeling
Type: concurrent
Moderators: Abbe Bensimon
Panelists: Roosevelt Mosley, Ellen Fitzsimmons

Price Optimization: A European Case Study Done the American Way

This session will discuss a case study of a price optimization analysis performed for personal automobile in a European country. We have added a pseudo-American rating plan to this analysis and show the effect on a carrier's profit and retention of optimizing certain rating variables. The session will include discussions of the process of collecting data and performing the analysis, from both the actuarial and software perspectives. This session will also discuss the structure of the price optimization process and its major components. We will focus on issues and challenges in data, modeling, and implementation. The contents of the presentation will relate to price optimization for both personal lines and commercial lines.
Source: 2008 Fall SIS- Predictive Modeling
Type: concurrent
Moderators: Abbe Bensimon
Panelists: Lee Bowron, Mo Masud, Jun Yan, Terry Quakenbush

GLM II

GLM I provided the case for using GLMs and some basic GLM theory. GLM II will be a practical session outlining basic modeling strategy. The discussion will cover topics such as overall modeling strategy, selecting an appropriate error structure and link function, simplifying the GLM (i.e., excluding variables, grouping levels, and fitting curves), complicating the GLM (i.e., adding interactions), and validating the final model. The session will discuss diagnostics that help test the selections made.
Source: 2008 Fall SIS- Predictive Modeling
Type: concurrent
Panelists: Claudine Modlin
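One model-validation diagnostic of the kind GLM II alludes to is deviance comparison; a minimal sketch with invented claim counts and fitted values (not from the session materials):

```python
import math

# Hedged sketch: comparing two candidate Poisson GLMs by deviance.
# Lower deviance indicates a closer fit to the observed counts
# (subject to the usual penalty considerations for extra parameters).

def poisson_deviance(observed, fitted):
    """Sum of Poisson deviance contributions 2*(y*log(y/mu) - (y - mu))."""
    dev = 0.0
    for y, mu in zip(observed, fitted):
        term = y * math.log(y / mu) if y > 0 else 0.0
        dev += 2.0 * (term - (y - mu))
    return dev

claims = [0, 1, 2, 4, 3]
fit_a = [0.5, 1.2, 1.8, 3.6, 2.9]  # candidate model with predictors
fit_b = [2.0, 2.0, 2.0, 2.0, 2.0]  # intercept-only model
dev_a = poisson_deviance(claims, fit_a)
dev_b = poisson_deviance(claims, fit_b)
```

In the simplify-versus-complicate strategy the session describes, the drop in deviance from adding or interacting a variable is weighed against the degrees of freedom it consumes.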

Text Mining on Unstructured Data

It is generally believed that about 80% of all data is unstructured. Unstructured data is data that is not contained in specified formats, with values of definite meaning to users, of the kind typically found in a computerized database. Unstructured data includes free-form claim description fields in claims databases, underwriter notes in an underwriting file, the titles and contents of emails, answers to open-ended survey questions, and words or phrases typed into a search engine. Unstructured data is often ignored when performing predictive modeling analyses. This session will give an overview of some kinds of unstructured data and how they can be used in predictive modeling. Specific examples of the application of text mining will be provided. It will give some background on the methods used with unstructured data. Presenters will also provide references and discuss some of the key literature on the topic.
Source: 2008 Fall SIS- Predictive Modeling
Type: concurrent
Moderators: Abbe Bensimon
Panelists: Martin Ellingsworth, Karthik Balakrishnan
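A minimal example of turning a free-form claim description into model-ready features, the basic first step in the text mining the session describes (the records and flag term are invented for illustration):

```python
import re
from collections import Counter

# Hedged sketch: a bag-of-words pass over free-form claim notes,
# producing term counts plus one simple derived indicator variable.

def bag_of_words(text):
    """Lowercased word tokens as term counts."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

descriptions = [
    "Rear-ended at light, soft tissue injury, attorney retained",
    "Hail damage to roof, contractor estimate attached",
    "Slip and fall in aisle, prior back injury, attorney retained",
]

features = [bag_of_words(d) for d in descriptions]
# Derived indicator: does the note mention an attorney? Such flags can
# then enter a predictive model alongside structured fields.
attorney_flag = [f["attorney"] > 0 for f in features]
```

Real text-mining pipelines add stemming, stop-word removal, and dimensionality reduction on top of this, but the principle of converting text into countable features is the same.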