Towards an optimal implementation of MLP in FPGA

Abstract

We present the hardware implementation of a partially connected neural network, defined as an extension of the Multi-Layer Perceptron (MLP) model. We demonstrate that partially connected neural networks achieve higher computing speed while requiring less memory and fewer computing resources. This work presents a complete study comparing the hardware implementations of the MLP and a partially connected version (XMLP) in terms of computing speed, hardware resources, and performance cost. Furthermore, we also study different memory management strategies for the connectivity patterns. © Springer-Verlag Berlin Heidelberg 2006.
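To illustrate the idea of partial connectivity described in the abstract, the following minimal sketch shows one way a layer with a binary connectivity mask can be expressed in software. This is an assumption-laden illustration, not the authors' XMLP design: the function name, the use of a dense mask, and the tanh activation are all hypothetical choices made here for clarity.

```python
import numpy as np

def partially_connected_layer(x, weights, mask, bias):
    """Forward pass of one layer where 'mask' zeroes absent connections.

    In a hardware design, only the connections with mask == 1 would need
    stored weights and multipliers, which is the source of the memory and
    resource savings the paper reports (the dense mask here is purely
    illustrative).
    """
    return np.tanh((weights * mask) @ x + bias)

# Example: 8 inputs, 4 neurons, each neuron connected to only 3 inputs.
rng = np.random.default_rng(0)
n_in, n_out, fan_in = 8, 4, 3

mask = np.zeros((n_out, n_in))
for neuron in range(n_out):
    mask[neuron, rng.choice(n_in, size=fan_in, replace=False)] = 1.0

weights = rng.standard_normal((n_out, n_in))
bias = np.zeros(n_out)
x = rng.standard_normal(n_in)

y = partially_connected_layer(x, weights, mask, bias)
print(y.shape)  # (4,)
```

In an FPGA context, the connectivity pattern would typically be fixed at synthesis time or stored compactly (one of the memory management strategies the paper compares), rather than kept as a full dense mask as in this sketch.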

Citation (APA)

Ortigosa, E. M., Cañas, A., Rodríguez, R., Díaz, J., & Mota, S. (2006). Towards an optimal implementation of MLP in FPGA. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3985 LNCS, pp. 46–51). Springer Verlag. https://doi.org/10.1007/11802839_7
