Heuristics of instability and stabilization in model selection

Abstract

In model selection, usually a "best" predictor is chosen from a collection {μ̂(·, s)} of predictors, where μ̂(·, s) is the minimum least-squares predictor in a collection 𝒰_s of predictors. Here s is a complexity parameter; that is, the smaller s, the lower dimensional/smoother the models in 𝒰_s. If ℒ is the data used to derive the sequence {μ̂(·, s)}, the procedure is called unstable if a small change in ℒ can cause large changes in {μ̂(·, s)}. With a crystal ball, one could pick the predictor in {μ̂(·, s)} having minimum prediction error. Without prescience, one uses test sets, cross-validation and so forth. The difference in prediction error between the crystal-ball selection and the statistician's choice we call predictive loss. For an unstable procedure the predictive loss is large. This is shown by some analytics in a simple case and by simulation results in a more complex comparison of four different linear regression methods. Unstable procedures can be stabilized by perturbing the data, getting a new predictor sequence {μ̂′(·, s)} and then averaging over many such predictor sequences.
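The stabilization idea in the last sentence can be sketched in a few lines: perturb the training data, refit, and average the resulting predictors. A minimal sketch follows, using bootstrap resampling as the perturbation and an ordinary least-squares fit as the (possibly unstable) base procedure; the function names and the choice of bootstrap as the perturbation mechanism are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def least_squares_fit(X, y):
    """Minimum least-squares linear predictor (one member of the
    collection of predictors). Returns a function X_new -> predictions."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return lambda X_new: X_new @ coef

def stabilized_predictor(X, y, fit, n_perturbations=25, rng=None):
    """Stabilize an unstable fitting procedure by perturbing the data
    (here: bootstrap resampling, an assumed choice) and averaging the
    predictors fitted to each perturbed copy."""
    rng = np.random.default_rng(rng)
    n = len(y)
    models = []
    for _ in range(n_perturbations):
        idx = rng.integers(0, n, size=n)   # perturbed copy of the data
        models.append(fit(X[idx], y[idx]))
    # The stabilized predictor is the average over the perturbed predictors.
    return lambda X_new: np.mean([m(X_new) for m in models], axis=0)
```

Averaging smooths out the fluctuations that small changes in ℒ induce in any single fitted predictor, which is why it reduces the predictive loss of an unstable procedure.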

Citation (APA)

Breiman, L. (1996). Heuristics of instability and stabilization in model selection. Annals of Statistics, 24(6), 2350–2383. https://doi.org/10.1214/aos/1032181158
