Significance of Dimensionality Reduction in Image Processing

  • V. B S
  • M. David J

Abstract

This paper presents a comparative study of two linear dimensionality reduction methods: PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis). The main idea of PCA is to transform the high-dimensional input space into a feature space that captures the maximal variance. Feature selection in traditional LDA is obtained by maximizing the distance between classes while minimizing the distance within classes. Thus PCA finds the axes of maximum variance for the whole data set, whereas LDA finds the axes that give the best class separability. A neural network is trained on the reduced feature set (obtained via PCA or LDA) of the images in the database, using the back-propagation algorithm, for fast retrieval of images from the database. The proposed method is evaluated on a general image database using MATLAB, and performance is assessed with precision and recall measures. Experimental results show that PCA gives the better performance, with higher precision and recall values and lower computational complexity than LDA.
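The two reductions contrasted in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' method: the paper's experiments use MATLAB and their own image database, while the sketch below uses scikit-learn on its bundled 8×8 digit images (both the library and the dataset are my substitutions).

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Stand-in image data: 8x8 digit images flattened to 64-dimensional vectors.
X, y = load_digits(return_X_y=True)

# PCA: unsupervised; keeps the directions of maximal variance
# computed over the whole data set (labels y are ignored).
X_pca = PCA(n_components=10).fit_transform(X)

# LDA: supervised; maximizes between-class scatter relative to
# within-class scatter, yielding at most (n_classes - 1) components.
X_lda = LinearDiscriminantAnalysis(n_components=9).fit_transform(X, y)

print(X_pca.shape)  # (1797, 10) -- 64 dimensions reduced to 10
print(X_lda.shape)  # (1797, 9)  -- at most 10 - 1 = 9 discriminant axes
```

Either reduced representation could then serve as the input layer of a back-propagation network, as the paper describes, in place of the raw 64-dimensional vectors.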

Citation (APA)

V. B, S., & M. David, J. (2015). Significance of Dimensionality Reduction in Image Processing. Signal & Image Processing : An International Journal, 6(3), 27–42. https://doi.org/10.5121/sipij.2015.6303
