A review of dimensionality reduction in high-dimensional data using multi-core and many-core architecture


Abstract

Data is growing in two ways: in size and in dimensionality. To deal with such huge data, "big data", researchers and data analysts rely on machine learning and data mining techniques. However, the performance of these techniques degrades under this twofold growth, which further adds to the complexity of the data. The need of the hour is to cope with the complexity of such datasets and to focus on improving the accuracy of data mining and machine learning techniques as well as on enhancing the performance of the algorithms. The accuracy of mining algorithms can be enhanced by reducing the dimensionality of the data: not all the information that contributes to the dimensionality of a dataset is important for these analysis techniques, so the dimensionality can be reduced. Contemporary research focuses on techniques for removing unwanted, unnecessary, and redundant information, in particular the data that inflates dimensionality and makes a dataset high dimensional. The performance of these algorithms can be further improved with parallel computing on high-performance computing (HPC) infrastructure. Parallel computing on multi-core and many-core architectures, including the low-cost general-purpose graphics processing unit (GPGPU), is a boon for data analysts and researchers seeking high-performance solutions. GPGPUs have gained popularity due to their cost benefits and very high data processing power, and parallel processing techniques achieve better speedup and scaleup. The objective of this paper is to give researchers and data analysts insight into how the high dimensionality of data can be handled so that neither the accuracy nor the computational complexity of machine learning and data mining techniques is compromised. To that end, this work discusses various parallel computing approaches on multi-core (CPU) and many-core (GPGPU) architectures for improving time complexity, and reviews contemporary dimensionality reduction methods.
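As a concrete illustration of the kind of dimensionality reduction the paper reviews, the sketch below projects a high-dimensional dataset onto a small number of principal components with PCA. It is a minimal example, assuming scikit-learn and NumPy are available; the synthetic data and the choice of 20 components are illustrative assumptions, not taken from the reviewed work.

```python
# Minimal sketch: reduce a high-dimensional dataset with PCA before mining.
# Assumes NumPy and scikit-learn; the synthetic data and the choice of
# 20 principal components are illustrative only.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 2_000))   # 10,000 samples, 2,000 features

pca = PCA(n_components=20)             # keep only 20 principal components
X_reduced = pca.fit_transform(X)       # shape becomes (10000, 20)

print(X_reduced.shape)
print("explained variance ratio:", pca.explained_variance_ratio_.sum())
```

The reduced matrix X_reduced can then be fed to a clustering or classification algorithm, which typically runs faster on 20 features than on 2,000; a multi-core or GPGPU implementation of the same pipeline is what the paper surveys for further speedup.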

Citation (APA)

Patil, S. V., & Kulkarni, D. B. (2019). A review of dimensionality reduction in high-dimensional data using multi-core and many-core architecture. In Communications in Computer and Information Science (Vol. 964, pp. 54–63). Springer Verlag. https://doi.org/10.1007/978-981-13-7729-7_4
