A general class of neural networks for principal component analysis and factor analysis

Abstract

We review a recently proposed family of functions for finding the principal and minor components of a data set. We extend the family so that the Principal Subspace of the data set is found using a method similar to the Bigradient algorithm. We then amend the method in a way previously shown to turn a Principal Component Analysis (PCA) rule into a rule for performing Factor Analysis (FA), and demonstrate its power on a standard problem. In both cases we find that, whereas the rules in the family for extracting a single principal component all have similar convergence and stability properties, the multiple-output networks for both PCA and FA have different properties.
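
The rules discussed belong to the broad class of Hebbian-style neural PCA algorithms that learn a principal subspace from data online. As a rough illustration of the kind of multiple-output learning rule involved (a generic textbook Oja-style subspace rule, not the specific family or Bigradient variant proposed in the paper; the data, learning rate, and component count below are illustrative assumptions), consider the following Python sketch:

    import numpy as np

    def oja_subspace_rule(X, n_components=2, lr=0.01, n_epochs=200, seed=0):
        """Illustrative Oja-style subspace learning.

        W converges to an (approximately orthonormal) basis of the principal
        subspace of the zero-mean data X (rows are samples, columns features).
        This is a standard textbook rule, not the family from the paper.
        """
        rng = np.random.default_rng(seed)
        n_features = X.shape[1]
        W = rng.normal(scale=0.1, size=(n_features, n_components))  # weights
        for _ in range(n_epochs):
            for x in X:
                y = W.T @ x                        # network outputs
                W += lr * np.outer(x - W @ y, y)   # symmetric subspace update
        return W

    # Example usage with synthetic data whose variance differs per feature.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 5)) @ np.diag([3.0, 2.0, 1.0, 0.5, 0.1])
    X -= X.mean(axis=0)
    W = oja_subspace_rule(X, n_components=2)
    print(W)  # columns approximately span the two-dimensional principal subspace

Because the update is symmetric across outputs, such a rule recovers only the principal subspace rather than the individual principal components; distinguishing subspace rules from component-extracting and FA variants is exactly the kind of comparison the abstract describes.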

Citation (APA)

Han, Y., & Fyfe, C. (2000). A general class of neural networks for principal component analysis and factor analysis. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1983, pp. 158–163). Springer Verlag. https://doi.org/10.1007/3-540-44491-2_24
