James–Stein for the leading eigenvector


Abstract

Recent research identifies and corrects bias, such as excess dispersion, in the leading sample eigenvector of a factor-based covariance matrix estimated from a high-dimension, low-sample-size (HL) data set. We show that eigenvector bias can have a substantial impact on variance-minimizing optimization in the HL regime, while bias in estimated eigenvalues may have little effect. We describe a data-driven eigenvector shrinkage estimator in the HL regime called “James–Stein for eigenvectors” (JSE) and its close relationship with the James–Stein (JS) estimator for a collection of averages. We show, both theoretically and with numerical experiments, that, for certain variance-minimizing problems of practical importance, efforts to correct eigenvalues have little value in comparison to the JSE correction of the leading eigenvector. When certain extra information is present, JSE is a consistent estimator of the leading eigenvector.
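The shrinkage idea summarized above can be sketched in code. The following is an illustrative sketch only, not the authors' method: by analogy with the JS estimator for a collection of averages, it pulls the dispersed entries of the leading sample eigenvector toward their common mean and renormalizes. The function name `jse_sketch` and the fixed shrinkage weight `c` are assumptions for illustration; the paper derives a data-driven weight that is not reproduced in this abstract.

```python
import numpy as np

def jse_sketch(returns, c=0.5):
    """Illustrative JSE-style shrinkage of the leading sample eigenvector.

    returns : (n_obs, p) data matrix, with p >> n_obs (the HL regime).
    c       : shrinkage weight in [0, 1]; a fixed placeholder standing in
              for the paper's data-driven value.
    """
    X = returns - returns.mean(axis=0)
    cov = X.T @ X / X.shape[0]           # sample covariance (p x p)
    vals, vecs = np.linalg.eigh(cov)
    h = vecs[:, -1]                      # leading sample eigenvector
    if h.sum() < 0:                      # fix the sign convention
        h = -h
    m = np.full_like(h, h.mean())        # shrinkage target: constant vector
    h_jse = m + c * (h - m)              # pull entry dispersion toward the mean
    return h_jse / np.linalg.norm(h_jse)  # renormalize to unit length

# Toy HL example: 200 variables, 20 observations, one near-constant factor.
rng = np.random.default_rng(0)
p, n = 200, 20
beta = 1 + 0.3 * rng.standard_normal(p)
returns = np.outer(rng.standard_normal(n), beta) \
    + 0.5 * rng.standard_normal((n, p))
h_jse = jse_sketch(returns)
print(h_jse.shape, float(np.linalg.norm(h_jse)))
```

In this sketch the constant vector plays the role that the grand mean plays in classical James–Stein shrinkage of a collection of averages, which is the structural analogy the abstract refers to.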

Citation (APA)

Goldberg, L. R., & Kercheval, A. N. (2023). James–Stein for the leading eigenvector. Proceedings of the National Academy of Sciences of the United States of America, 120(2). https://doi.org/10.1073/pnas.2207046120
