Functional data analysis is intrinsically infinite dimensional; functional principal component analysis reduces dimension to a finite level, and points to the most significant components of the data. However, although this technique is often discussed, its properties are not as well understood as they might be. We show how the properties of functional principal component analysis can be elucidated through stochastic expansions and related results. Our approach quantifies the errors that arise through statistical approximation, in successive terms of orders n^{-1/2}, n^{-1}, n^{-3/2}, ..., where n denotes sample size. The expansions show how spacings among eigenvalues impact on statistical performance. The term of size n^{-1/2} illustrates first-order properties and leads directly to limit theory which describes the dominant effect of spacings. Thus, for example, spacings are seen to have an immediate, first-order effect on properties of eigenfunction estimators, but only a second-order effect on eigenvalue estimators. Our results can be used to explore properties of existing methods, and also to suggest new techniques. In particular, we suggest bootstrap methods for constructing simultaneous confidence regions for an infinite number of eigenvalues, and also for individual eigenvalues and eigenvectors. © 2006 Royal Statistical Society.
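To make the quantities in the abstract concrete, here is a minimal numerical sketch of empirical functional PCA and a naive bootstrap interval for the leading eigenvalue. It assumes curves observed on a common equally spaced grid, a simulated sine-basis model, and illustrative names (fpca, true_eigenvalues); it is not the authors' expansion-based analysis or their simultaneous confidence-region construction, only a plain eigendecomposition of the discretised sample covariance operator with a curve-resampling percentile interval.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate n curves X_i(t) = sum_k sqrt(lambda_k) * Z_ik * phi_k(t) on [0, 1]
# (illustrative model, not from the paper).
n, m = 200, 101                      # sample size, number of grid points
t = np.linspace(0.0, 1.0, m)
true_eigenvalues = np.array([1.0, 0.5, 0.25, 0.125])
basis = np.sqrt(2.0) * np.array([np.sin((k + 1) * np.pi * t)
                                 for k in range(4)])      # orthonormal in L2[0, 1]
scores = rng.standard_normal((n, 4)) * np.sqrt(true_eigenvalues)
X = scores @ basis                   # n x m matrix of discretised curves

def fpca(X, t, n_components=4):
    """Eigenvalues/eigenfunctions of the empirical covariance operator,
    approximating the integral operator by a Riemann sum on the grid."""
    dt = t[1] - t[0]
    Xc = X - X.mean(axis=0)
    K = (Xc.T @ Xc) / X.shape[0]              # pointwise sample covariance K(s, t)
    evals, evecs = np.linalg.eigh(K * dt)     # discretised operator eigenproblem
    idx = np.argsort(evals)[::-1][:n_components]
    eigenvalues = evals[idx]
    eigenfunctions = evecs[:, idx].T / np.sqrt(dt)   # rescale to unit L2 norm
    return eigenvalues, eigenfunctions

eigenvalues, eigenfunctions = fpca(X, t)
print("estimated eigenvalues:", np.round(eigenvalues, 3))

# Naive nonparametric bootstrap percentile interval for the leading eigenvalue,
# resampling whole curves; the paper's simultaneous regions are more involved.
B = 500
boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, n)
    boot[b] = fpca(X[idx], t, n_components=1)[0][0]
print("95% CI for lambda_1:", np.percentile(boot, [2.5, 97.5]))
```

The sketch also hints at the role of eigenvalue spacings discussed in the abstract: when two true eigenvalues are close, the corresponding estimated eigenfunctions mix heavily across bootstrap resamples, while the eigenvalue estimates themselves remain comparatively stable.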
CITATION STYLE
Hall, P., & Hosseini-Nasab, M. (2006). On properties of functional principal components analysis. Journal of the Royal Statistical Society. Series B: Statistical Methodology, 68(1), 109–126. https://doi.org/10.1111/j.1467-9868.2005.00535.x