Uniform asymptotic inference and the bootstrap after model selection


Abstract

Recently, Tibshirani et al. [J. Amer. Statist. Assoc. 111 (2016) 600–620] proposed a method for making inferences about parameters defined by model selection, in a typical regression setting with normally distributed errors. Here, we study the large-sample properties of this method, without assuming normality. We prove that the test statistic of Tibshirani et al. (2016) is asymptotically valid, as the number of samples n grows and the dimension d of the regression problem stays fixed. Our asymptotic result holds uniformly over a wide class of nonnormal error distributions. We also propose an efficient bootstrap version of this test that is provably (asymptotically) conservative and, in practice, often delivers shorter intervals than those from the original normality-based approach. Finally, we prove that the test statistic of Tibshirani et al. (2016) does not enjoy uniform validity in a high-dimensional setting, when the dimension d is allowed to grow.
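To give a concrete sense of the bootstrap idea in the fixed-d, large-n regression setting the abstract describes, the sketch below shows a generic pairs bootstrap percentile interval for a single OLS coefficient. This is an illustrative assumption on my part, not the paper's selective-inference procedure: the function name `pairs_bootstrap_ci` and all parameters are hypothetical, and the example uses centered exponential (nonnormal) errors merely to echo the nonnormality discussed above.

```python
# Illustrative sketch only: a generic pairs bootstrap for one OLS
# coefficient. This is NOT the selective-inference test of the paper,
# just a minimal demonstration of bootstrap interval construction.
import numpy as np

rng = np.random.default_rng(0)

def pairs_bootstrap_ci(X, y, coef_index, B=1000, alpha=0.1):
    """Percentile bootstrap CI for one OLS coefficient (fixed d, large n)."""
    n = X.shape[0]
    stats = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=n)   # resample (x_i, y_i) pairs
        Xb, yb = X[idx], y[idx]
        beta_b, *_ = np.linalg.lstsq(Xb, yb, rcond=None)
        stats[b] = beta_b[coef_index]
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Simulated data with nonnormal (centered exponential) errors
n, d = 500, 3
X = rng.standard_normal((n, d))
beta = np.array([1.0, 0.0, -2.0])
y = X @ beta + (rng.exponential(1.0, n) - 1.0)
lo, hi = pairs_bootstrap_ci(X, y, coef_index=0)
```

With n = 500 observations the resulting 90% interval for the first coefficient (true value 1.0) is narrow; note this sketch ignores model selection entirely, which is precisely the complication the paper addresses.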

Citation (APA)

Tibshirani, R. J., Rinaldo, A., Tibshirani, R., & Wasserman, L. (2018). Uniform asymptotic inference and the bootstrap after model selection. Annals of Statistics, 46(3), 1255–1287. https://doi.org/10.1214/17-AOS1584
