Reconfigurable MAC-Based architecture for parallel hardware implementation on FPGAs of artificial neural networks using fractional fixed point representation


Abstract

In this paper, we devise a hardware architecture for ANNs that takes advantage of the dedicated multiply-accumulate blocks, commonly called MACs, to compute both the weighted sum and the activation function. The proposed architecture requires a reduced silicon area, since the MACs come at no extra cost as built-in FPGA cores. The implementation uses integer fixed-point arithmetic and represents real numbers as fractions. The hardware is fast because it is massively parallel. Moreover, the proposed architecture adjusts itself on-the-fly to the user-defined configuration of the neural network, i.e., the number of layers and the number of neurons per layer can be set without any hardware changes. © 2009 Springer Berlin Heidelberg.
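The fractional fixed-point scheme described in the abstract can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the Q4.12 format, the function names, and the float sigmoid stand-in are all assumptions made for the example; the paper computes the activation function with MACs as well.

```python
import math

# Illustrative Q4.12 fixed-point format: 4 integer bits, 12 fractional bits.
# (The actual word width used in the paper may differ.)
FRAC_BITS = 12
SCALE = 1 << FRAC_BITS

def to_fixed(x: float) -> int:
    """Quantize a real number to a fixed-point integer."""
    return round(x * SCALE)

def from_fixed(q: int) -> float:
    """Recover the real value from its fixed-point representation."""
    return q / SCALE

def neuron(inputs, weights):
    """Weighted sum via integer multiply-accumulate, as an FPGA MAC block would do."""
    acc = 0
    for x, w in zip(inputs, weights):
        acc += to_fixed(x) * to_fixed(w)  # product carries 2*FRAC_BITS fraction bits
    acc >>= FRAC_BITS                     # rescale the accumulator back to Q4.12
    # Sigmoid activation on the dequantized sum (float stand-in for illustration).
    return 1.0 / (1.0 + math.exp(-from_fixed(acc)))
```

For example, `neuron([0.5, 0.25], [1.0, 2.0])` accumulates the weighted sum 1.0 entirely in integer arithmetic before applying the activation.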


APA

Da Silva, R. M., Nedjah, N., & de Macedo Mourelle, L. (2009). Reconfigurable MAC-based architecture for parallel hardware implementation on FPGAs of artificial neural networks using fractional fixed point representation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5768 LNCS, pp. 475–484). https://doi.org/10.1007/978-3-642-04274-4_50
