In this paper, a new family of resampling-based penalization procedures for model selection is defined in a general framework. It generalizes several methods, including Efron's bootstrap penalization and the leave-one-out penalization recently proposed by Arlot (2008), to any exchangeable weighted bootstrap resampling scheme. In the heteroscedastic regression framework, assuming that the models have a particular structure, these resampling penalties are proved to satisfy a non-asymptotic oracle inequality with leading constant close to 1. In particular, they are asymptotically optimal. Resampling penalties are used to define an estimator that adapts simultaneously to the smoothness of the regression function and to the heteroscedasticity of the noise. This is remarkable because resampling penalties are general-purpose devices, which have not been built specifically to handle heteroscedastic data. Hence, resampling penalties naturally adapt to heteroscedasticity. A simulation study shows that resampling penalties improve on V-fold cross-validation in terms of final prediction error, in particular when the signal-to-noise ratio is not large.
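As a concrete illustration of the resampling heuristic behind these procedures, here is a minimal Python sketch of an Efron-bootstrap resampling penalty used to select among regressogram (histogram regression) models. All function names and the data-generating setup are hypothetical, and the calibration constant C is simply set to 1, whereas the paper calibrates a constant depending on the weight distribution of the exchangeable resampling scheme; treat this as a sketch under those assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_regressogram(x, y, w, n_bins):
    """Weighted least-squares regressogram on a regular partition of [0, 1]."""
    bins = np.minimum((x * n_bins).astype(int), n_bins - 1)
    means = np.zeros(n_bins)
    for b in range(n_bins):
        mask = bins == b
        sw = w[mask].sum()
        if sw > 0:  # bins receiving zero resampling weight keep a 0 estimate
            means[b] = (w[mask] * y[mask]).sum() / sw
    return bins, means

def empirical_risk(y, bins, means, w):
    """(Weighted) empirical quadratic risk of a fitted regressogram."""
    return np.sum(w * (y - means[bins]) ** 2) / w.sum()

def resampling_penalty(x, y, n_bins, n_resamples=200, C=1.0):
    """Monte-Carlo estimate of the resampling penalty for one model:
    average over resamples of (risk of the resampled estimator under the
    original sample) minus (its risk under the resampled, weighted sample)."""
    n = len(x)
    ones = np.ones(n)
    pen = 0.0
    for _ in range(n_resamples):
        # Efron bootstrap weights: multinomial counts, one exchangeable scheme
        w = rng.multinomial(n, ones / n).astype(float)
        bins, means = fit_regressogram(x, y, w, n_bins)
        pen += (empirical_risk(y, bins, means, ones)
                - empirical_risk(y, bins, means, w))
    return C * pen / n_resamples

# Model selection over a family of regressograms, heteroscedastic noise
n = 500
x = rng.uniform(size=n)
y = np.sin(2 * np.pi * x) + (0.1 + x) * rng.normal(size=n)
ones = np.ones(n)
criteria = {}
for n_bins in [1, 2, 4, 8, 16, 32, 64]:
    bins, means = fit_regressogram(x, y, ones, n_bins)
    criteria[n_bins] = (empirical_risk(y, bins, means, ones)
                        + resampling_penalty(x, y, n_bins))
selected = min(criteria, key=criteria.get)  # minimizer of the penalized risk
```

Swapping the multinomial weights for another exchangeable weight distribution (e.g., leave-one-out weights) yields the other members of the family, with a different calibration constant.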
Arlot, S. (2009). Model selection by resampling penalization. Electronic Journal of Statistics, 3, 557–624. https://doi.org/10.1214/08-EJS196