On cross-validation for sparse reduced rank regression

Abstract

In high-dimensional data analysis, regularization methods that pursue sparsity and/or low rank have recently received much attention. To provide a proper amount of shrinkage, it is typical to use a grid search together with a model comparison criterion to find the optimal regularization parameters. We show, however, that fixing the parameters across all folds of cross-validation may result in an inconsistency issue, and that it is more appropriate to cross-validate projection–selection patterns to obtain the best coefficient estimate. Our in-sample error studies in jointly sparse and rank-deficient models lead to a new class of information criteria with four scale-free forms that bypass estimation of the noise level. By use of an identity, we propose a novel scale-free calibration that helps cross-validation achieve the minimax optimal error rate non-asymptotically. Experiments support the efficacy of the proposed methods.
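The central algorithmic idea, cross-validating a selection pattern (here, a row support and a rank) rather than holding a penalty parameter fixed across folds, can be sketched in a few lines. The Python sketch below is illustrative only and is not the authors' procedure: the helper names `rrr_fit` and `pattern_cv`, the plain reduced-rank refit, and the caller-supplied candidate supports are all assumptions, and the paper's scale-free information criteria and calibration are omitted.

```python
import numpy as np

def rrr_fit(X, Y, rank):
    """Reduced-rank regression: OLS fit followed by rank truncation
    of the fitted values (an assumed, simplified refitting step)."""
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
    # Project the fit onto its top-`rank` right singular subspace.
    _, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
    P = Vt[:rank].T @ Vt[:rank]          # m x m projection matrix
    return B_ols @ P

def pattern_cv(X, Y, supports, ranks, n_folds=5, seed=0):
    """Cross-validate (support, rank) patterns: each fold refits with
    the pattern held fixed, instead of re-tuning a penalty level."""
    n = X.shape[0]
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n), n_folds)
    best, best_err = None, np.inf
    for J in supports:                   # J: candidate row support (indices)
        for r in ranks:
            err = 0.0
            for val in folds:
                tr = np.setdiff1d(np.arange(n), val)
                B = rrr_fit(X[np.ix_(tr, J)], Y[tr], r)
                err += np.sum((Y[val] - X[np.ix_(val, J)] @ B) ** 2)
            if err < best_err:
                best, best_err = (J, r), err
    J, r = best
    B_full = np.zeros((X.shape[1], Y.shape[1]))
    B_full[J] = rrr_fit(X[:, J], Y, r)   # final refit on all the data
    return B_full, best
```

With `supports` generated, for example, by ranking rows of a pilot estimate by norm, a call such as `pattern_cv(X, Y, supports, ranks=[1, 2, 3])` returns the refitted coefficient matrix together with the selected pattern; the point of the design is that the validation folds judge a fixed projection–selection pattern, not a penalty value whose effect varies with the fold.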

Citation (APA)

She, Y., & Tran, H. (2019). On cross-validation for sparse reduced rank regression. Journal of the Royal Statistical Society. Series B: Statistical Methodology, 81(1), 145–161. https://doi.org/10.1111/rssb.12295
