
Heuristics of instability and stabilization in model selection

by Leo Breiman
The Annals of Statistics

Abstract

In model selection, usually a ``best'' predictor is chosen from a collection {mu_hat(., s)} of predictors, where mu_hat(., s) is the minimum least-squares predictor in a collection U_s of predictors. Here s is a complexity parameter; that is, the smaller s, the lower dimensional/smoother the models in U_s. If L is the data used to derive the sequence {mu_hat(., s)}, the procedure is called unstable if a small change in L can cause large changes in {mu_hat(., s)}. With a crystal ball, one could pick the predictor in {mu_hat(., s)} having minimum prediction error. Without prescience, one uses test sets, cross-validation and so forth. The difference in prediction error between the crystal ball selection and the statistician's choice we call predictive loss. For an unstable procedure the predictive loss is large. This is shown by some analytics in a simple case and by simulation results in a more complex comparison of four different linear regression methods. Unstable procedures can be stabilized by perturbing the data, getting a new predictor sequence {mu_hat'(., s)}, and then averaging over many such predictor sequences.
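The stabilization idea in the abstract's last sentence, perturb the data L and average the resulting predictors, can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the correlation-based subset selector standing in for minimizing least squares over U_s, and the bootstrap as the data perturbation, are assumptions chosen for concreteness.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_best_subset(X, y, s):
    """Unstable base procedure: least-squares fit on the s predictors most
    correlated with y (a stand-in for the minimum least-squares predictor
    in a collection U_s; s plays the role of the complexity parameter)."""
    idx = np.argsort(np.abs(X.T @ y))[::-1][:s]
    coef, *_ = np.linalg.lstsq(X[:, idx], y, rcond=None)
    beta = np.zeros(X.shape[1])
    beta[idx] = coef
    return beta

def stabilized_fit(X, y, s, B=50):
    """Stabilized procedure: perturb the data by bootstrap resampling,
    refit the unstable selector on each perturbed set, and average the
    resulting predictors (coefficient vectors)."""
    n = len(y)
    betas = []
    for _ in range(B):
        b = rng.integers(0, n, n)  # a perturbed version of the data L
        betas.append(fit_best_subset(X[b], y[b], s))
    return np.mean(betas, axis=0)

# Toy data: n = 60 observations, p = 10 predictors, 3 truly active.
n, p = 60, 10
X = rng.standard_normal((n, p))
y = X[:, :3] @ np.array([3.0, 2.0, 1.0]) + rng.standard_normal(n)

beta_single = fit_best_subset(X, y, s=3)  # one unstable selection
beta_stable = stabilized_fit(X, y, s=3)   # bootstrap-averaged predictor
```

Small changes in the data can swap which predictors the base selector picks, so `beta_single` jumps discontinuously across resamples; averaging over many perturbed fits smooths those jumps, which is the mechanism the paper analyzes.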


