Browse Research
2007
The paper by Stelljes [1], the subject of this discussion, is a welcome addition to the Casualty Actuarial Society literature on nonlinear regression for loss reserving. This discussion will predominantly concern a key assumption made in [1]. In particular, on page 361:
2007
Using a Claim Simulation Model for Reserving and Loss Forecasting for Medical Professional Liability
Various recent papers have included criticisms related to the use of link-ratio techniques for estimating ultimate losses. While this paper does not review these criticisms, it does outline development characteristics of medical professional liability losses that would lead the actuary to believe that link-ratio techniques may not always be the best available option for projecting ultimate losses.
2007
This paper presents a framework for stochastically modeling the path of the ultimate loss ratio estimate through time from the inception of exposure to the payment of all claims. The framework is illustrated using Hayne's lognormal loss development model, but the approach can be used with other stochastic loss development models.
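As a minimal sketch of what such a framework involves, the following simulates one path of the ultimate loss ratio estimate under an assumed lognormal development-factor model in the spirit of Hayne's approach; the factor assumptions, premium, and paid losses below are illustrative only and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Illustrative assumptions (not from the paper): lognormal age-to-age factors,
# earned premium of 100, and paid losses to date of 40 at the first evaluation.
log_means = np.log([1.50, 1.20, 1.08, 1.03, 1.01])   # log of assumed development factors
log_sds   = [0.10, 0.06, 0.04, 0.02, 0.01]           # assumed volatility by age
premium, paid = 100.0, 40.0

# Simulate one path of the ultimate loss ratio estimate through time:
# at each evaluation, apply a sampled factor and re-estimate ultimate losses
# as paid-to-date times the product of the remaining assumed factors.
for age, (mu, sigma) in enumerate(zip(log_means, log_sds), start=1):
    paid *= rng.lognormal(mu, sigma)                  # losses develop one period
    remaining = np.exp(log_means[age:]).prod()        # remaining assumed development
    ultimate_estimate = paid * remaining
    print(f"age {age}: estimated ultimate loss ratio = {ultimate_estimate / premium:.3f}")
```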
2007
Motivation: Recent focus on corporate governance (e.g., Sarbanes-Oxley) in the United States and the use of predictive modeling techniques in the property/casualty insurance industry have raised the profile of data management and data quality issues in the actuarial profession.
Method: Representatives of the Insurance Data Management Association (IDMA) identified seven data management texts they felt would be most helpful for actuaries.
2007
The current literature describes pricing and reserving of medical malpractice insurance as written on either an occurrence or a claims-made basis. In current practice, many policies allow an incident to be reported before a claim is submitted, so that the claim attaches to the current claims-made policy. This creates experience with characteristics of both coverage types.
2007
In the actuarial literature, researchers have suggested various statistical procedures for estimating the parameters of claim count or frequency models. In particular, the Poisson regression model, also known as the Generalized Linear Model (GLM) with Poisson error structure, has been widely used in recent years.
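As a brief illustration of the technique named here, the sketch below fits a Poisson GLM (log link) to simulated claim counts with statsmodels; the data, column names, and rating variable are invented for the example and are not from the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Invented illustrative data: claim counts by driver age group, with exposure in car-years.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age_group": rng.choice(["young", "middle", "senior"], size=500),
    "exposure": rng.uniform(0.5, 1.0, size=500),
})
base_rate = df["age_group"].map({"young": 0.30, "middle": 0.10, "senior": 0.15})
df["claims"] = rng.poisson(base_rate * df["exposure"])

# Poisson GLM with a log link; exposure enters as an offset so the
# coefficients are interpreted on the claim-frequency (per car-year) scale.
model = smf.glm(
    "claims ~ age_group",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["exposure"]),
)
print(model.fit().summary())
```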
2007
There are numerous articles extolling the future potential of the insurance industry in The People's Republic of China (hereafter referred to as China) since the implementation of economic market reforms. While the Chinese insurance market is currently much smaller than those of many industrialized nations, its rate of premium growth is among the highest in the world.
2007
This paper is a case study of the quality of clinical judgment in loss reserving for Commercial Auto Liability in the U.S. for accident years 1995 through 2001. Research on clinical vs. statistical prediction in non-insurance fields indicates that relatively simple models frequently produce better results than human experts with access to the same information. To test the quality of clinical judgment vs.
2007
Semi-parametric mixture models have well documented technical advantages for modeling loss distributions. These technical advantages are documented in papers that focus on the estimation of the parameters of semi-parametric models.
2007
This paper is a review and case study of Butsic's expected policyholder deficit (EPD) framework for measuring and maintaining risk-based capital adequacy for property-casualty insurance companies. The promise of the framework is that long-term solvency protection can be achieved by periodic assessment and adjustment of risk-based capital, using a consistent and short time horizon (e.g., one year) for risks on both sides of the balance sheet.
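As a rough illustration of the EPD concept underlying this framework (not the paper's own calculations), the sketch below estimates the expected policyholder deficit by simulation and searches for the capital that meets an illustrative target EPD ratio; the liability distribution and the 1% target are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative assumptions: lognormal liabilities, and assets equal to
# expected liabilities plus a candidate amount of risk-based capital.
n = 200_000
liabilities = rng.lognormal(mean=np.log(100.0), sigma=0.25, size=n)
expected_liabilities = liabilities.mean()

def epd_ratio(capital: float) -> float:
    """Expected policyholder deficit as a fraction of expected liabilities."""
    assets = expected_liabilities + capital
    deficit = np.maximum(liabilities - assets, 0.0)
    return deficit.mean() / expected_liabilities

# Search for the smallest capital whose EPD ratio meets an illustrative 1% target.
target = 0.01
capital_grid = np.linspace(0.0, 100.0, 401)
capital_needed = next(c for c in capital_grid if epd_ratio(c) <= target)
print(f"capital meeting a {target:.0%} EPD ratio target: about {capital_needed:.1f}")
```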
2007
One method of simulating random variables is to generate uniform 0 to 1 variables, then use them in the inverse of the cumulative distribution function of the random variable you want to simulate.
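A minimal sketch of this inverse-transform method, using an illustrative lognormal severity distribution as the target; the parameters are assumptions chosen only for the example.

```python
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(7)

# Step 1: generate uniform (0, 1) variates.
u = rng.uniform(size=10_000)

# Step 2: push them through the inverse CDF (the percent-point function, ppf)
# of the target distribution -- here an illustrative lognormal severity curve.
mu, sigma = 8.0, 1.5                      # illustrative parameters
losses = lognorm.ppf(u, s=sigma, scale=np.exp(mu))

print(f"simulated mean severity: {losses.mean():,.0f}")
print(f"simulated 99th percentile: {np.quantile(losses, 0.99):,.0f}")
```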
2007
2007 Summer CAS E-Forum Welcome to the new CAS E-Forum. The E-Forum replaces the traditional printed Forum as the means to disseminate non-refereed research papers to the actuarial community. The CAS will no longer distribute the Forum in hard copy format. The CAS is not responsible for statements or opinions expressed in the papers in the E-Forum.
2007
While data quality problems are widespread, it is rare for an event to provide a high-profile example of how questionable information quality can have a worldwide business effect. The 2000 US Presidential election and the subsequent confusion around the Florida recount highlight the business need for high-quality data. The 2000 election illustrated at least three types of potential data quality issues.
2007
The main theme of this book is that data is the raw material for an informational product and that, as in manufacturing, the quality of the product is determined by customer satisfaction. According to the book, everyone in the organization has a role in establishing and maintaining information quality to deliver a quality product to the customer.
2007
2007 Spring Forum, including a Reinsurance Call Paper. These files are in Portable Document Format (PDF); you will need Acrobat Reader to view the articles.
Table of Contents
Download Entire Volume
2007 Reinsurance Call Paper: We're Skewed—The Bias in Small Samples from Skewed Distributions, by Kirk G. Fleming, FCAS, MAAA
2007
The competitive nature of the technology industry has made the field of software testing increasingly important. While software development is not a major necessity for an actuary's work product, we are called from time to time to develop software products for our clients.
Software Testing in the Real World was derived from the experiences of its author, Edward Kit, with "real-world" testing concerns and issues.
2007
This book focuses on data accuracy, which the author sees as the foundation for measuring the quality of data alongside other dimensions: completeness, relevance, timeliness, being understood, and being trusted. After providing a thorough understanding of the basic concepts of data inaccuracies and accuracies, the author outlines the basic elements and functions of a data quality assurance program.
2007
Distortion risk measures, including Value-at-Risk and Tail-VaR, are currently suggested by regulators in finance (Basel II) and insurance (Solvency II). We introduce nonparametric estimators of the sensitivity of distortion risk measures (DRM) with respect to portfolio allocation.
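For orientation only, the sketch below computes the plain nonparametric (empirical) VaR and Tail-VaR of a simulated two-line portfolio; it shows the basic estimators such sensitivity analysis builds on, not the authors' sensitivity estimators, and the allocation, distributions, and confidence level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative two-line portfolio: losses are a weighted sum of two correlated
# lognormal components; the weights represent the portfolio allocation.
n, alpha = 100_000, 0.99
z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.4], [0.4, 1.0]], size=n)
line_losses = np.exp(4.0 + 0.5 * z)            # two lognormal loss columns
weights = np.array([0.6, 0.4])
portfolio = line_losses @ weights

# Plain empirical (nonparametric) estimators:
var_hat = np.quantile(portfolio, alpha)                 # Value-at-Risk
tvar_hat = portfolio[portfolio > var_hat].mean()        # Tail-VaR / expected shortfall

print(f"VaR_{alpha:.0%}  = {var_hat:,.1f}")
print(f"TVaR_{alpha:.0%} = {tvar_hat:,.1f}")
```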
2007
Claim provisions are crucial for the financial stability of insurance companies. This is why the actuarial literature has proposed several claim reserving methods, which are usually based on statistical concepts. However, the changing and uncertain behaviour of insurance environments makes it inadvisable to rely on a wide database when calculating claim reserves, which in turn makes the use of Fuzzy Set Theory very attractive.
2007
A conditional one-factor model can account for the spread in the average returns of portfolios sorted by book-to-market ratios over the long run from 1926 to 2001. In contrast, earlier studies document strong evidence of a book-to-market effect using OLS regressions over post-1963 data.
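As a rough, hypothetical sketch of what a conditional one-factor specification can look like (not the paper's model or data), the code below contrasts an unconditional OLS market regression with one whose beta varies with a lagged state variable; all series are simulated and the variable names are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)

# Invented monthly data: market excess returns, a lagged state variable z
# (e.g., a standardized dividend yield), and a value-portfolio excess return
# whose market beta moves with z -- the essence of a conditional one-factor model.
T = 600
mkt = rng.normal(0.006, 0.045, size=T)
z = rng.normal(0.0, 1.0, size=T)                      # lagged conditioning variable
beta_t = 1.0 + 0.3 * z                                # time-varying beta
port = beta_t * mkt + rng.normal(0.0, 0.02, size=T)   # portfolio excess return

# Unconditional one-factor regression (constant beta) ...
X_uncond = sm.add_constant(mkt)
print(sm.OLS(port, X_uncond).fit().params)

# ... versus the conditional specification, where the interaction term z * mkt
# lets beta vary with the state variable.
X_cond = sm.add_constant(np.column_stack([mkt, z * mkt]))
print(sm.OLS(port, X_cond).fit().params)
```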