We present the hardware implementation of a partially connected neural network, defined as an extension of the Multi-Layer Perceptron (MLP) model. We demonstrate that partially connected neural networks achieve higher performance in terms of computing speed, requiring less memory and fewer computing resources. This work presents a complete study comparing the hardware implementations of the MLP and a partially connected version (XMLP) in terms of computing speed, hardware resources and performance cost. Furthermore, we also study different memory management strategies for the connectivity patterns. © Springer-Verlag Berlin Heidelberg 2006.
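The core idea of a partially connected layer can be sketched in software as a standard MLP layer whose weight matrix is gated by a binary connectivity mask, so only a subset of input-to-neuron connections (and hence stored weights) is used. This is a minimal illustrative sketch, not the authors' FPGA design; the mask density, activation function, and layer sizes below are assumptions.

```python
import numpy as np

def partially_connected_layer(x, weights, mask, bias):
    """Forward pass of one XMLP-style layer.

    The binary mask zeroes out absent connections, so only the
    masked-in weights would need to be stored and computed in hardware.
    """
    return np.tanh(x @ (weights * mask) + bias)

rng = np.random.default_rng(0)
n_in, n_out = 8, 4                      # hypothetical layer sizes
x = rng.normal(size=(1, n_in))          # one input vector
w = rng.normal(size=(n_in, n_out))
b = np.zeros(n_out)

# Hypothetical sparse connectivity pattern: each output neuron
# is connected to only 2 of the 8 inputs (25% density).
mask = np.zeros((n_in, n_out))
for j in range(n_out):
    mask[rng.choice(n_in, size=2, replace=False), j] = 1.0

y = partially_connected_layer(x, w, mask, b)
```

In a hardware implementation, a mask like this translates into fewer stored weights and multipliers per neuron, which is the source of the memory and speed savings the comparison examines.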
Ortigosa, E. M., Cañas, A., Rodríguez, R., Díaz, J., & Mota, S. (2006). Towards an optimal implementation of MLP in FPGA. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3985 LNCS, pp. 46–51). Springer Verlag. https://doi.org/10.1007/11802839_7