On the arithmetic precision for implementing back-propagation networks on FPGA: A case study


Abstract

Artificial Neural Networks (ANNs) are inherently parallel architectures, which makes them a natural fit for custom implementation on FPGAs. One important implementation issue is determining the numerical precision format that offers the best tradeoff between accuracy and implementation area. Standard single- or double-precision floating-point representations minimize quantization error but require significant hardware resources. A less precise fixed-point representation may require fewer hardware resources, but it introduces quantization errors that can prevent learning from taking place, especially in regression problems. This chapter examines this issue and reports on a recent experiment in which we implemented a multilayer perceptron (MLP) on an FPGA using both fixed- and floating-point precision. Results show that the fixed-point MLP implementation was over 12x faster, over 13x smaller in area, and achieved far greater processing density than the floating-point FPGA-based MLP. © 2006 Springer.
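The following is a minimal software sketch, not the chapter's FPGA implementation, of the precision tradeoff the abstract describes: it quantizes the weights, inputs, and intermediate sums of a one-hidden-layer MLP forward pass onto a hypothetical 16-bit fixed-point grid with 12 fractional bits (the format, layer sizes, and random values are assumptions for illustration) and compares the result against the floating-point computation.

```python
import numpy as np

FRAC_BITS = 12          # assumed fractional width of the fixed-point format
SCALE = 1 << FRAC_BITS  # 4096 quantization steps per unit

def to_fixed(x):
    """Round a float array onto the assumed fixed-point grid."""
    return np.round(x * SCALE) / SCALE

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mlp_forward(x, w1, b1, w2, b2, fixed=False):
    """One-hidden-layer MLP forward pass, optionally quantized at each step."""
    q = to_fixed if fixed else (lambda a: a)
    h = sigmoid(q(q(x) @ q(w1) + q(b1)))
    return sigmoid(q(q(h) @ q(w2) + q(b2)))

rng = np.random.default_rng(0)
x  = rng.uniform(-1, 1, (1, 8))                  # one 8-feature input
w1 = rng.uniform(-1, 1, (8, 4)); b1 = rng.uniform(-1, 1, 4)
w2 = rng.uniform(-1, 1, (4, 1)); b2 = rng.uniform(-1, 1, 1)

y_float = mlp_forward(x, w1, b1, w2, b2, fixed=False)
y_fixed = mlp_forward(x, w1, b1, w2, b2, fixed=True)
print("float:", y_float, "fixed:", y_fixed, "abs error:", np.abs(y_float - y_fixed))
```

With enough fractional bits the quantization error stays small for a forward pass; the chapter's concern is that during back-propagation training the accumulated error can stall learning, which is why the choice of fixed-point width matters.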

Citation (APA)

Moussa, M., Areibi, S., & Nichols, K. (2006). On the arithmetic precision for implementing back-propagation networks on FPGA: A case study. In FPGA Implementations of Neural Networks (pp. 37–61). Springer US. https://doi.org/10.1007/0-387-28487-7_2
