A Super-Vector Deep Learning Coprocessor with High Performance-Power Ratio

Abstract

The maturity of deep learning theory and the development of computer hardware have made deep learning algorithms powerful tools for mining the underlying features of big data. There is an increasing demand for high-accuracy, real-time object detection in intelligent communication and control tasks on embedded systems, and the limited battery capacity and resources of embedded systems call for more energy-efficient deep learning accelerators. We propose a super-vector coprocessor architecture called SVP-DL. SVP-DL can perform the various matrix operations used in deep learning algorithms by computing multidimensional vectors with dedicated vector and scalar instructions, which enables flexible combinations of matrix operations and data organization. We verified SVP-DL on a self-developed field-programmable gate array (FPGA) platform, programming a typical deep belief network and a sparse coding network on the coprocessor. Experimental results showed that the SVP-DL architecture on the FPGA achieves 1.7 to 2.1 times the performance of a PC platform while running at a much lower clock frequency, and about 9 times the performance-power efficiency of the PC.
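
The abstract does not give the SVP-DL instruction set, but the general idea of mapping a matrix operation onto vector instructions can be illustrated with a simple software analogue: a matrix-vector product decomposed into row-wise vector multiply-accumulate (MAC) steps, the kind of primitive a super-vector coprocessor would execute in hardware. The function names and sizes below are hypothetical and are not taken from the paper.

    #include <stdio.h>

    #define ROWS 4
    #define COLS 8

    /* Software analogue of a vector multiply-accumulate primitive:
     * acc += a[i] * b[i] over a whole vector, the kind of operation a
     * super-vector coprocessor would issue as a single vector instruction. */
    static float vector_mac(const float *a, const float *b, int n)
    {
        float acc = 0.0f;
        for (int i = 0; i < n; i++)
            acc += a[i] * b[i];
        return acc;
    }

    /* A dense-layer matrix-vector product expressed as one vector MAC per
     * output element; each call here stands in for one vector instruction. */
    static void matvec(float w[ROWS][COLS], const float *x, float *y)
    {
        for (int r = 0; r < ROWS; r++)
            y[r] = vector_mac(w[r], x, COLS);
    }

    int main(void)
    {
        float w[ROWS][COLS], x[COLS], y[ROWS];

        /* Fill the weight matrix and input vector with toy values. */
        for (int r = 0; r < ROWS; r++)
            for (int c = 0; c < COLS; c++)
                w[r][c] = 0.01f * (float)(r + c);
        for (int c = 0; c < COLS; c++)
            x[c] = 1.0f;

        matvec(w, x, y);
        for (int r = 0; r < ROWS; r++)
            printf("y[%d] = %f\n", r, y[r]);
        return 0;
    }

In hardware, the inner loop of vector_mac disappears: the coprocessor applies the MAC across vector lanes in parallel, and the scalar instructions handle the per-row control, which is what allows flexible combinations of matrix operations and data organization.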

Citation (APA)

Jiang, J., Liu, Z., Xu, J., & Hu, R. (2017). A Super-Vector Deep Learning Coprocessor with High Performance-Power Ratio. In Studies in Computational Intelligence (Vol. 710, pp. 81–92). Springer Verlag. https://doi.org/10.1007/978-3-319-56660-3_8
