Deep neural network architecture implementation on FPGAs using a layer multiplexing scheme


Abstract

In recent years, predictive models based on deep learning have achieved enormous success in several domains, including pattern recognition tasks, language translation, and software design. Deep learning combines several techniques to achieve its prediction accuracy, but essentially all existing approaches are based on multi-layer neural networks with deep architectures, i.e., several layers of processing units containing large numbers of neurons. Because simulating large networks requires heavy computational power, GPU- and cluster-based computation strategies have been used successfully. In this work, a layer multiplexing scheme is presented that permits the simulation of deep neural networks on FPGA boards. To demonstrate the usefulness of the scheme, deep architectures trained with the classical back-propagation algorithm are simulated on FPGA boards and compared to standard implementations, showing the computation-speed advantages of the proposed scheme.
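The abstract does not detail the multiplexing scheme itself, but the core idea — one physical layer of processing units reused in turn for every logical layer of the deep network — can be sketched in software. The following is a minimal illustrative sketch (not the authors' implementation), assuming sigmoid units and a simple list of per-layer weight matrices:

```python
import numpy as np

def forward_multiplexed(x, layer_weights, layer_biases):
    """Forward pass in which a single shared computation block is
    time-multiplexed across logical layers, analogous to loading one
    physical FPGA layer with each layer's parameters in sequence.

    This is an illustrative sketch only; the names and the sigmoid
    activation are assumptions, not taken from the paper.
    """
    activations = np.asarray(x, dtype=float)
    for W, b in zip(layer_weights, layer_biases):
        # "Load" this logical layer's weights into the shared block,
        # then compute its output with a sigmoid activation.
        activations = 1.0 / (1.0 + np.exp(-(W @ activations + b)))
    return activations
```

In hardware, the loop body corresponds to rewriting the weight memory of the single physical layer, so resource usage is bounded by the widest layer rather than by the total depth of the network.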

Citation (APA)

Ortega-Zamorano, F., Jerez, J. M., Gómez, I., & Franco, L. (2016). Deep neural network architecture implementation on FPGAs using a layer multiplexing scheme. In Advances in Intelligent Systems and Computing (Vol. 474, pp. 79–86). Springer Verlag. https://doi.org/10.1007/978-3-319-40162-1_9
