Feature selection for identifying critical variables of principal components based on K-nearest neighbor rule

Abstract

Principal component analysis (PCA) is a popular linear feature extractor for unsupervised dimensionality reduction and is found in many branches of science, including computer vision, text processing, and bioinformatics. However, the axes of the lower-dimensional space, i.e., the principal components, are a set of new variables with no clear physical meaning. Thus, interpreting results obtained in the lower-dimensional PCA space, and acquiring data for test samples, still involve all of the original measurements. To select original features that identify the critical variables of principal components, we develop a new method with a k-nearest neighbor clustering procedure and three new similarity measures that link the physically meaningless principal components back to a subset of the original measurements. Experiments are conducted on benchmark data sets, and on face data sets with different poses, expressions, backgrounds, and occlusions for gender classification, to demonstrate the method's superiority. © Springer-Verlag Berlin Heidelberg 2007.
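The idea of tracing principal components back to original measurements can be illustrated with a simple loading-based heuristic: rank each original variable by the magnitude of its loading on the leading principal components and keep the top-ranked ones. This is only a minimal sketch of the general idea; the paper's actual method uses a k-nearest neighbor clustering procedure and three similarity measures, which are not reproduced here. All function and parameter names below are illustrative assumptions.

```python
import numpy as np

def select_critical_variables(X, n_components=2, k_per_component=3):
    """Rank original variables by the magnitude of their PCA loadings.

    NOTE: this is a simplified loading-based heuristic for illustration,
    not the k-NN-based similarity measures proposed in the paper.
    """
    Xc = X - X.mean(axis=0)                       # center the data
    # SVD of the centered data: rows of Vt are the principal directions,
    # so Vt[i, j] is the loading of original variable j on component i.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    selected = set()
    for pc in Vt[:n_components]:
        # keep the original variables with the largest absolute loadings
        top = np.argsort(-np.abs(pc))[:k_per_component]
        selected.update(top.tolist())
    return sorted(selected)

# Toy example: variables 0 and 1 carry almost all the variance,
# variables 2-4 are low-amplitude noise.
rng = np.random.default_rng(0)
base = rng.normal(size=(100, 2)) * [5.0, 3.0]
noise = rng.normal(size=(100, 3)) * 0.1
X = np.hstack([base, noise])
print(select_critical_variables(X, n_components=2, k_per_component=1))
```

With one variable kept per component, the two leading components point back to the two high-variance original variables, which is the kind of interpretable mapping from components to measurements that the paper aims for.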

Citation (APA)

Li, Y., & Lu, B. L. (2007). Feature selection for identifying critical variables of principal components based on K-nearest neighbor rule. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4781 LNCS, pp. 193–204). Springer Verlag. https://doi.org/10.1007/978-3-540-76414-4_20
