A survey on the cures for the curse of dimensionality in big data

Abstract

Dimensionality reduction techniques are used to reduce the complexity of analyzing high-dimensional data sets. The raw input data may have many dimensions, and analysis can become slow and produce wrong predictions if unnecessary attributes are included. By applying dimensionality reduction, the input data can be shrunk so that accurate predictions are obtained at lower cost. This paper surveys machine learning approaches to dimensionality reduction, including principal component analysis (PCA), singular value decomposition (SVD), linear discriminant analysis (LDA), kernel PCA, and artificial neural networks.
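
As a concrete illustration (not taken from the paper), the sketch below shows how one of the surveyed techniques, PCA, can be applied with scikit-learn to shrink a high-dimensional data set before prediction; the synthetic data, the 95% variance threshold, and all parameter choices are placeholders.

```python
# Minimal sketch (assumption, not from the paper): PCA-based dimensionality
# reduction on a tabular, numeric data set using scikit-learn.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))        # 200 samples, 50 raw attributes (synthetic)

# Standardize first: PCA is sensitive to the scale of each attribute.
X_scaled = StandardScaler().fit_transform(X)

# Keep enough principal components to explain 95% of the variance.
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X_scaled)

print(X_reduced.shape)                # far fewer columns than the raw input
```

A downstream model trained on X_reduced sees fewer, decorrelated attributes, which is the cost and accuracy benefit the abstract refers to.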

Citation (APA)

Remesh, R., & Pattabiraman, V. (2017). A survey on the cures for the curse of dimensionality in big data. Asian Journal of Pharmaceutical and Clinical Research, 10, 355–360. https://doi.org/10.22159/ajpcr.2017.v10s1.19755
