An empirical comparison of dimensionality reduction techniques for pattern classification


Abstract

To some extent, all classifiers are subject to the curse of dimensionality. Consequently, pattern classification is often preceded by finding a reduced-dimensional representation of the patterns. In this paper we empirically compare the performance of unsupervised and supervised dimensionality reduction techniques. The data set we consider is obtained by segmenting cells in cytological preparations and extracting 9 features from each cell. We evaluate the performance of 4 dimensionality reduction techniques (2 unsupervised and 2 supervised), with and without noise. The unsupervised techniques are principal component analysis and self-organizing feature maps; the supervised techniques are Fisher's linear discriminants and multi-layered feed-forward neural networks. Our results on a real-world data set indicate that multi-layered feed-forward neural networks outperform the other three dimensionality reduction techniques, and that all four techniques are sensitive to noise.
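The paper itself provides no code, but the kind of reduction it evaluates can be sketched with principal component analysis, one of its two unsupervised techniques. The data below is a random stand-in, not the authors' cytological data set; the 9-feature, 2-component setup merely mirrors the dimensions described in the abstract.

```python
import numpy as np

# Stand-in data: 100 "cells", each with 9 extracted features
# (random values, purely illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 9))

# PCA from first principles: centre the data, eigendecompose the
# covariance matrix, and project onto the top principal directions.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(Xc) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
components = eigvecs[:, ::-1][:, :2]     # top-2 principal directions
X_reduced = Xc @ components              # reduced 2-D representation

print(X_reduced.shape)                   # (100, 2)
```

A supervised alternative such as Fisher's linear discriminant would instead choose the projection that maximizes class separation, which is why labels are required.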

Citation (APA)

Balachander, T., Kothari, R., & Cualing, H. (1997). An empirical comparison of dimensionality reduction techniques for pattern classification. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1327, pp. 589–594). Springer Verlag. https://doi.org/10.1007/bfb0020218
