A Common Subtle Error: Using Maximum Likelihood Tests to Choose between Different Distributions

Abstract
Maximum Likelihood Estimation (MLE) is one of the most popular methodologies used to fit a parametric distribution to an observed set of data. MLE’s popularity stems from its desirable asymptotic properties. Maximum Likelihood (ML) estimators are consistent, which means that as the sample size increases the estimate converges to the true value of the parameter, so the researcher can be increasingly confident of obtaining an estimate close to it. They are asymptotically normal with the lowest possible asymptotic variance (they achieve the Cramér-Rao lower bound), which makes inference tests relatively easy and statistically more powerful. In addition, they are invariant under parameter transformations, which means that any function of the ML estimates is itself the ML estimate of the corresponding function of the parameters. For instance, if a pricing analyst computes a pure premium from relativities estimated by MLE, the predicted pure premium is also an ML estimate and hence inherits all of the desirable properties listed above.
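
The following is a minimal sketch, not taken from the paper, illustrating two of the properties mentioned in the abstract on simulated loss data: fitting a parametric distribution by MLE with scipy, and using the invariance property to obtain the ML estimate of a derived quantity (here the mean loss). The gamma distribution, seed, and sample size are arbitrary choices for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

# Simulate losses from a "true" gamma distribution (hypothetical data).
true_shape, true_scale = 2.0, 1000.0
losses = rng.gamma(shape=true_shape, scale=true_scale, size=5000)

# Maximum likelihood fit of a gamma distribution (location fixed at 0).
shape_hat, loc_hat, scale_hat = stats.gamma.fit(losses, floc=0)
print(f"MLE shape = {shape_hat:.3f}, scale = {scale_hat:.1f}")

# Invariance: the MLE of a function of the parameters is that function
# of the ML estimates, e.g. the fitted mean loss = shape * scale.
mean_hat = shape_hat * scale_hat
print(f"MLE of mean loss = {mean_hat:.1f} (true mean = {true_shape * true_scale:.1f})")
```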

Keywords: Loss Distributions

Volume: Summer, Vol. 2
Pages: 1-2
Year: 2012
Categories: Financial and Statistical Methods; Loss Distributions
Publication: Casualty Actuarial Society E-Forum