Multi-class Support Vector Machine Training and Classification Based on MPI-GPU Hybrid Parallel Architecture


Abstract

Machine Learning (ML) is the process of extracting knowledge from existing information so that a machine can predict new information based on what it has learned. Many ML algorithms aim at improving the learning process. The support vector machine (SVM) is one of the best classifiers for hyper-spectral images. Like many ML algorithms, SVM training incurs a high computational cost because it is formulated as a very large quadratic programming optimization problem. The proposed sequential minimal optimization (SMO) approach solves this computationally demanding problem using a hybrid parallel model that employs the graphics processing unit (GPU) to implement the binary classifier and the message passing interface (MPI) to solve the multi-class problem with the "one-against-one" method. Our hybrid implementation achieves a speed-up of 40X over the sequential implementation (LIBSVM) and a speed-up of 7.5X over a CUDA-OpenMP implementation when training a dataset of 44,442 records with 102 features and 9 classes, and a speed-up of 13.7X over LIBSVM in the classification of 60,300 records.
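The "one-against-one" scheme mentioned in the abstract decomposes a k-class problem into k(k-1)/2 independent binary sub-problems, whose results are combined by majority voting; because the sub-problems are independent, they can be distributed across MPI ranks, each training its binary SVM on a GPU. A minimal Python sketch of the decomposition and voting logic (function names are hypothetical; this is not the authors' CUDA/MPI code):

```python
from itertools import combinations

def one_against_one_pairs(n_classes):
    """Enumerate the binary sub-problems of one-against-one multi-class SVM.

    For k classes, k*(k-1)/2 binary classifiers are trained, one per
    unordered pair of class labels. Each pair is an independent training
    task, which is what makes MPI-level distribution possible.
    """
    return list(combinations(range(n_classes), 2))

def majority_vote(pairwise_winners):
    """Combine pairwise predictions for one test sample by majority voting.

    `pairwise_winners` maps each class pair (i, j) to the class label that
    the corresponding binary classifier predicted; the label with the most
    votes is the final prediction.
    """
    votes = {}
    for winner in pairwise_winners.values():
        votes[winner] = votes.get(winner, 0) + 1
    return max(votes, key=votes.get)

# For the paper's 9-class hyper-spectral dataset, one-against-one
# yields 9 * 8 / 2 = 36 binary classifiers.
pairs = one_against_one_pairs(9)
```

In the hybrid architecture described by the abstract, each of these pairwise training tasks would be assigned to an MPI process, with the inner SMO solver for each binary SVM running on the GPU.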


CITATION STYLE

APA

Elgarhy, I., Khaled, H., Gohary, R. E., & Faheem, H. M. (2019). Multi-class Support Vector Machine Training and Classification Based on MPI-GPU Hybrid Parallel Architecture. In Advances in Intelligent Systems and Computing (Vol. 845, pp. 179–188). Springer Verlag. https://doi.org/10.1007/978-3-319-99010-1_16
