Actually this title is just a come-on. (I got my inspiration from an article by
Andrew Tobias--the *consumer advocate*--entitled, *Ralph Nader is a
Big Fat Idiot*.) I've long thought that the R2 statistic is misused in many
contexts: for example, in filings justifying a trend pick, or in
comparing how well two lines *fit*. Lee Barclay wrote an article a while
ago on some of the problems, and there are others. I'd like to suggest that
in many contexts, when we talk about how well the curve
fits the data points, what we really mean is: How confident are we in the slope
of the line? Is the 4% pure premium trend we selected in our filing
reasonable given the data points? Which of these possible trend
picks is more reasonable given this set of points? What we should then
provide is either a confidence interval around the slope estimate or
(equivalently) a standard deviation.*
It is easy to give an example of two lines with identical confidence
intervals and with very different R2s. And a line with a relatively low R2,
because it is relatively horizontal, might actually be a very good fit. It is
the standard deviation of the slope estimate (or perhaps, if we are
comparing different scales, the *CV*: the standard deviation over the
estimate) that we should use, or a confidence interval, depending on the
purpose and audience. I've been speaking of vanilla regressions, but
this can be extended in many cases (e.g., multiple regressions, and models
non-linear *in the variables*, where comparisons of R2s are again
problematic for still more reasons).
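To make the first point concrete, here is a quick sketch (my own made-up numbers, not from any filing) that fits two lines to the same x's with the same noise. Because the residuals are identical, the standard error of the slope is the same for both fits, yet the R2s are wildly different: the steep line "explains" almost everything, the nearly horizontal line very little.

```python
import math
import random

def ols(x, y):
    """Ordinary least squares fit of y on x.

    Returns the slope, the standard error of the slope, and R2.
    """
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = ybar - b * xbar
    sse = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    se_b = math.sqrt(sse / (n - 2) / sxx)   # standard error of the slope
    sst = sum((yi - ybar) ** 2 for yi in y)
    r2 = 1 - sse / sst
    return b, se_b, r2

random.seed(1)
x = list(range(10))
noise = [random.gauss(0, 1) for _ in x]   # identical noise for both lines

flat = [0.3 * xi + e for xi, e in zip(x, noise)]    # nearly horizontal line
steep = [5.0 * xi + e for xi, e in zip(x, noise)]   # steep line

b1, se1, r2_1 = ols(x, flat)
b2, se2, r2_2 = ols(x, steep)
print(f"flat : slope = {b1:6.3f}  SE(slope) = {se1:.3f}  R2 = {r2_1:.3f}")
print(f"steep: slope = {b2:6.3f}  SE(slope) = {se2:.3f}  R2 = {r2_2:.3f}")
```

If you judged these two fits by R2 alone, you'd call the flat line a poor fit; by the standard error of the slope, both slopes are estimated equally well.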
Not that there aren't plenty of occasions where the R2 is the right
statistic to use; but there are plenty where it isn't.
I haven't produced any of the math because 1) I don't want to use too
much space, and I'm lazy; 2) most of you will be able to convince yourselves
of the validity of (the math part of) my position by looking at a few simple
formulas; and 3) all communications on this will be archived on the CAS Web Site, and I
may put more stuff there; if I can convince him to agree, I may also put up an
interesting back-and-forth on this issue that I had with another actuary.
In any case, I will put a very simple spreadsheet out there with a
stochastic error term, at the Web page at
http://www.casact.org/library/library.htm#prog (keep hitting F9),
showing this for regressions that one might typically find in a filing.
My proposal is this: In filings let's start providing the standard deviations
and confidence intervals of the slope estimate rather than R2s (I
suppose at first we'll have to do both to get departments used to the
idea). Similarly for internal management reports in many cases.
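As a sketch of what such an exhibit might show (the pure premiums below are hypothetical numbers I made up for illustration), here is a toy log-linear trend fit that reports the selected trend together with a 95% confidence interval for it, rather than an R2:

```python
import math

# Hypothetical annual pure premiums (made-up numbers for illustration)
pp = [100.0, 105.2, 108.1, 114.6, 118.0, 124.3, 128.9, 135.5]
x = list(range(len(pp)))
y = [math.log(p) for p in pp]   # log-linear (exponential) trend model

n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
a = ybar - b * xbar
sse = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
se_b = math.sqrt(sse / (n - 2) / sxx)   # standard error of the slope

t = 2.447  # t critical value, 95% two-sided, n - 2 = 6 degrees of freedom
lo, hi = b - t * se_b, b + t * se_b
print(f"annual trend estimate: {math.exp(b) - 1:.1%}")
print(f"95% CI for the trend: {math.exp(lo) - 1:.1%} to {math.exp(hi) - 1:.1%}")
```

The interval answers the question the filing actually raises: is the selected trend reasonable given these points?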
*Note that this is not the answer to the question of how good a
regression curve is, in the sense of meeting the regression assumptions.
Say you have many y values for each x value. We could have an
excellent model where the dispersion of the error term is very wide (i.e.,
a lot of noise); in this case the R2 will be relatively low, and the
confidence interval relatively wide. Nevertheless, the regression line could go right
through the means of all the y's (for each x). What are we interested in?
Are we interested in how well the line *fits*, or explains the
variation in, the individual data points? In that case this model has a lot of
noise, and either we're stuck (maybe we even generated the points
based on the regression assumptions), or maybe there's another model
out there somewhere that *explains* a lot more. On the other hand, if we
are interested in how well the line fits the means of the y's for each x
(and there are tests to measure this), then we've got a good model.
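One such test is the classical lack-of-fit F test, which needs replicated y's at each x. A sketch (simulated data, parameters my own): the points really do come from a straight line, but with a large error term, so R2 is low; splitting the residual sum of squares into pure error and lack of fit shows the line still tracks the group means.

```python
import math
import random

random.seed(7)
xs = list(range(1, 6))   # 5 distinct x values
reps = 30                # many y observations at each x
sigma = 3.0              # very noisy individual points

def true_mean(x):
    return 2.0 + 0.5 * x   # the underlying (excellent) straight-line model

x, y = [], []
for xi in xs:
    for _ in range(reps):
        x.append(xi)
        y.append(true_mean(xi) + random.gauss(0, sigma))

n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
a = ybar - b * xbar
sse = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
sst = sum((yi - ybar) ** 2 for yi in y)
r2 = 1 - sse / sst

# Split SSE into pure error (spread of the y's around each x's own mean)
# and lack of fit (distance of those group means from the fitted line).
# A small F statistic means no detectable lack of fit.
mean_at = {xi: sum(yj for xj, yj in zip(x, y) if xj == xi) / reps for xi in xs}
ss_pe = sum((yj - mean_at[xj]) ** 2 for xj, yj in zip(x, y))
ss_lof = sse - ss_pe
f_lof = (ss_lof / (len(xs) - 2)) / (ss_pe / (n - len(xs)))

print(f"R2 = {r2:.3f}  (low: the individual points are noisy)")
print(f"lack-of-fit F = {f_lof:.2f} on {len(xs) - 2} and {n - len(xs)} df")
```

A low R2 here says nothing against the model; the lack-of-fit statistic is the one that speaks to whether the line runs through the means.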
Visit the CAS Web Site at http://www.casact.org