We place ourselves in the setting of high-dimensional statistical inference, where the number of variables p in a dataset of interest is of the same order of magnitude as the number of observations n. We consider the spectrum of certain kernel random matrices, in particular n × n matrices whose (i, j)th entry is f(X_i'X_j/p) or f(‖X_i − X_j‖²/p), where p is the dimension of the data, the X_i are independent data vectors, and f is assumed to be a locally smooth function. The study is motivated by questions arising in statistics and computer science, where these matrices are used to perform, among other things, nonlinear versions of principal component analysis. Surprisingly, we show that in high dimensions, and for the models we analyze, the problem becomes essentially linear, which is at odds with heuristics sometimes used to justify the use of these methods. The analysis also highlights certain peculiarities of models widely studied in random matrix theory and raises questions about their relevance as tools for modeling high-dimensional data encountered in practice. © 2010 Institute of Mathematical Statistics.
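To make the object concrete, the following is a minimal sketch, not taken from the paper, of how one such kernel random matrix and its spectrum might be formed. It assumes i.i.d. standard Gaussian data vectors as the model and f = exp as one choice of locally smooth function; the function name kernel_random_matrix and the values of n and p are illustrative.

import numpy as np

def kernel_random_matrix(X, f):
    # n x n matrix with (i, j)th entry f(X_i' X_j / p): apply f entrywise to the
    # Gram matrix of the data, scaled by the dimension p.
    n, p = X.shape
    return f(X @ X.T / p)

rng = np.random.default_rng(0)
n, p = 500, 500                        # n and p of the same order of magnitude (illustrative choice)
X = rng.standard_normal((n, p))        # independent data vectors; Gaussian model is an assumption here

M = kernel_random_matrix(X, np.exp)    # f = exp, one locally smooth function
spectrum = np.linalg.eigvalsh(M)       # eigenvalues of the symmetric kernel matrix
print(spectrum[-5:])                   # a few of the largest eigenvalues

The Euclidean-distance variant with entries f(‖X_i − X_j‖²/p) can be formed the same way by applying f entrywise to the matrix of scaled squared pairwise distances.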
El Karoui, N. (2010). The spectrum of kernel random matrices. Annals of Statistics, 38(1), 1–50. https://doi.org/10.1214/08-AOS648