In recent years, predictive models based on deep learning strategies have achieved enormous success in several domains, including pattern recognition, language translation, and software design. Deep learning uses a combination of techniques to achieve its prediction accuracy, but essentially all existing approaches are based on multi-layer neural networks with deep architectures, i.e., several layers of processing units containing a large number of neurons. As the simulation of large networks requires heavy computational power, GPUs and cluster-based computation strategies have been used successfully. In this work, a layer multiplexing scheme is presented that permits the simulation of deep neural networks on FPGA boards. To demonstrate the usefulness of the scheme, deep architectures trained by the classical back-propagation algorithm are simulated on FPGA boards and compared to standard implementations, showing the computation-speed advantages of the proposed scheme.
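The core idea behind layer multiplexing can be illustrated in software: rather than instantiating every layer of the network in hardware, a single physical layer of processing units (sized for the widest logical layer) is reused, reloading each layer's weights in turn. The sketch below is a minimal software analogue of that idea, not the authors' actual HDL implementation; the function and parameter names are illustrative assumptions.

```python
import math

def sigmoid(x):
    # Classical activation used with back-propagation-trained networks.
    return 1.0 / (1.0 + math.exp(-x))

def forward_multiplexed(weights, biases, inputs):
    """Forward pass of a deep network through one reused 'physical' layer.

    weights[k] is the weight matrix of logical layer k (one row per
    neuron); the single loop body stands in for the one hardware layer
    that is time-multiplexed across all logical layers.
    """
    activations = inputs
    for W, b in zip(weights, biases):  # reload the layer's weights each cycle
        activations = [
            sigmoid(sum(w * a for w, a in zip(row, activations)) + b_i)
            for row, b_i in zip(W, b)
        ]
    return activations

# Tiny 2-3-1 network with fixed example weights.
W = [[[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]],  # layer 1: 3 neurons, 2 inputs
     [[0.7, -0.5, 0.2]]]                       # layer 2: 1 neuron, 3 inputs
b = [[0.0, 0.1, -0.1], [0.05]]
out = forward_multiplexed(W, b, [1.0, 0.5])
```

In hardware, the payoff is that resource usage is bounded by the widest layer rather than by the total network depth, which is what makes deep architectures fit on a single FPGA.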
CITATION STYLE
Ortega-Zamorano, F., Jerez, J. M., Gómez, I., & Franco, L. (2016). Deep neural network architecture implementation on FPGAs using a layer multiplexing scheme. In Advances in Intelligent Systems and Computing (Vol. 474, pp. 79–86). Springer Verlag. https://doi.org/10.1007/978-3-319-40162-1_9