A hybrid GPU-FPGA based design methodology for enhancing machine learning applications performance

Abstract

The high-density computing requirements of machine learning (ML) pose a challenging performance bottleneck. Limited by their sequential instruction execution model, traditional general-purpose processors are not well suited to efficient ML. In this work, we present an ML system design methodology based on GPUs and FPGAs to tackle this problem. The core idea of our proposal is that, when designing an ML platform, we leverage the graphics processing unit (GPU)’s high-density computing to perform model training and exploit the field-programmable gate array (FPGA)’s low latency to perform model inference. In between, we define a model converter, which transforms the model produced by the training module into one usable by the inference module. We evaluated our approach through two use cases: the first is handwritten digit recognition with a convolutional neural network, while the second predicts a data center’s power usage effectiveness with a deep neural network regression algorithm. The experimental results indicate that our solution can take advantage of the GPU’s and FPGA’s parallel computing capacity to improve the efficiency of training and inference significantly. Meanwhile, the solution preserves the accuracy and the mean square error when converting models between the different frameworks.
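To illustrate the model-converter idea described in the abstract, the sketch below shows, in miniature, what such a conversion must guarantee: parameters trained under one framework's naming and memory layout are remapped to another framework's conventions while the numerical behaviour of the model is preserved. This is a hypothetical, dependency-free illustration, not the paper's actual converter; the parameter names (`fc.weight`, `dense/kernel`) and the layout difference (output-major vs. input-major weight storage) are assumptions chosen for the example.

```python
def transpose(matrix):
    """Swap rows and columns of a 2-D list of floats."""
    return [list(col) for col in zip(*matrix)]

def convert_model(trained):
    """Remap training-side parameter names and layouts to inference-side ones.

    Hypothetical example: the training framework stores the fully connected
    layer's weights output-major under 'fc.weight'; the inference framework
    expects them input-major under 'dense/kernel'.
    """
    return {
        "dense/kernel": transpose(trained["fc.weight"]),  # (out, in) -> (in, out)
        "dense/bias": list(trained["fc.bias"]),
    }

def forward_training(model, x):
    """y = W @ x + b, with W stored (out, in) as in the training framework."""
    W, b = model["fc.weight"], model["fc.bias"]
    return [sum(w * xi for w, xi in zip(row, x)) + bi for row, bi in zip(W, b)]

def forward_inference(model, x):
    """The same affine map, with the kernel stored (in, out)."""
    K, b = model["dense/kernel"], model["dense/bias"]
    out = list(b)
    for xi, row in zip(x, K):
        for j, k in enumerate(row):
            out[j] += xi * k
    return out

# Tiny "trained" model (values chosen to be exactly representable in binary).
trained = {"fc.weight": [[0.5, -1.0], [2.0, 0.25]], "fc.bias": [0.125, -0.25]}
converted = convert_model(trained)
x = [1.0, 2.0]

# The property the paper's converter preserves: identical model behaviour
# (and hence accuracy / mean square error) across frameworks.
assert forward_training(trained, x) == forward_inference(converted, x)
```

The check at the end mirrors the paper's evaluation criterion: conversion between the training and inference representations must leave the model's outputs, and therefore its accuracy and mean square error, unchanged.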

CITATION STYLE

APA

Liu, X., Ounifi, H. A., Gherbi, A., Li, W., & Cheriet, M. (2020). A hybrid GPU-FPGA based design methodology for enhancing machine learning applications performance. Journal of Ambient Intelligence and Humanized Computing, 11(6), 2309–2323. https://doi.org/10.1007/s12652-019-01357-4
