Abstract
The computer-intensive statistical methodology of subsampling has received considerable attention since the early 1990s, when it was discovered to be generally valid under minimal assumptions. Indeed, subsampling provides a robust alternative to the bootstrap, whose consistency may fail unless problem-specific regularity conditions hold. The goal of this book is to provide a rigorous foundation for the theory and practice of subsampling. The asymptotic consistency of subsampling distribution estimation is shown under extremely weak conditions, including cases where the bootstrap fails. Consistent estimation of the sampling distribution of a statistic allows for the construction of asymptotically valid inferential procedures, such as confidence intervals and hypothesis tests. The crux of the method is to recompute a statistic over appropriate subsamples of the data and to use these recomputed values to build up an estimate of its sampling distribution. The usual context of independent and identically distributed observations is considered, as well as more complex data structures, such as stationary and nonstationary time series, random fields in discrete or continuous time, and marked point processes. Further issues treated in the book include variance estimation; subsampling when the rate of convergence of the estimator in question is unknown; the optimal choice of the subsample size in practice; extrapolation, interpolation, and higher-order accuracy of subsampling distribution and variance estimators; and subsampling for self-normalized statistics. A financial application is also considered, in which subsampling is used to investigate whether stock returns can be predicted from dividend yields.
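In the simplest i.i.d. setting, the recomputation scheme described above can be sketched as follows. This is a minimal illustration, not code from the book: the subsample size `b`, the number of random subsamples, and the choice of the sample mean as the statistic are all illustrative assumptions.

```python
import random
import statistics

def subsample_ci(data, b, alpha=0.05, n_subsamples=2000, seed=0):
    """Equal-tailed subsampling confidence interval for the mean (a sketch).

    The statistic is recomputed on random subsamples of size b; the
    empirical distribution of the centered, rescaled subsample values
    approximates the sampling distribution of the full-sample estimator.
    """
    rng = random.Random(seed)
    n = len(data)
    theta_hat = statistics.fmean(data)  # full-sample estimate
    # Recompute the statistic on subsamples, centering at the
    # full-sample estimate and scaling by the subsample rate sqrt(b).
    roots = []
    for _ in range(n_subsamples):
        sub = rng.sample(data, b)
        roots.append((b ** 0.5) * (statistics.fmean(sub) - theta_hat))
    roots.sort()
    # Quantiles of this subsampling distribution stand in for those of
    # sqrt(n) * (theta_hat - theta); invert them to get the interval.
    upper_q = roots[int((1 - alpha / 2) * n_subsamples) - 1]
    lower_q = roots[int((alpha / 2) * n_subsamples)]
    return theta_hat - upper_q / n ** 0.5, theta_hat - lower_q / n ** 0.5
```

For dependent data such as time series, the random subsets above would be replaced by blocks of consecutive observations, but the principle of recomputing the statistic over subsamples is the same.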
Year
1999
Categories
RPP1
Publisher
Springer-Verlag New York Inc.