Neural Networks on an FPGA and Hardware-Friendly Activation Functions

  • Si, J.
  • Harris, S. L.
  • Yfantis, E.

Abstract

This paper describes our implementation of several neural networks built on a field programmable gate array (FPGA) and used to recognize a handwritten digit dataset, the Modified National Institute of Standards and Technology (MNIST) database. We also propose a novel hardware-friendly activation function, the dynamic Rectified Linear Unit (D-ReLU), which achieves higher performance than traditional activation functions at no cost to accuracy. We built a 2-layer online-training multilayer perceptron (MLP) neural network on an FPGA with varying data widths. Reducing the data width from 8 to 4 bits reduces prediction accuracy by only 11%, while FPGA area decreases by 41%. Compared to networks that use the sigmoid function, our proposed D-ReLU function uses 24–41% less area with no loss of prediction accuracy. When the data width of the 3-layer networks is further reduced from 8 to 4 bits, prediction accuracy decreases by only 3–5%, while area is reduced by 9–28%. Moreover, the FPGA solutions achieve 29× faster execution time despite running at a 60× lower clock rate. Thus, FPGA implementations of neural networks offer a high-performance, low-power alternative to traditional software methods, and our novel D-ReLU activation function offers additional improvements in performance and power savings.
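The abstract does not spell out how D-ReLU is defined, but the area savings over sigmoid-based networks follow from a general property of ReLU-family activations: in fixed-point hardware they reduce to a sign check and a multiplexer, whereas a sigmoid needs an exponential (or a lookup-table/piecewise approximation). The C sketch below illustrates this contrast; the 8-bit two's-complement format, function names, and sample values are illustrative assumptions, not the paper's implementation.

```c
/*
 * Minimal sketch (not from the paper): why ReLU-style activations are
 * hardware-friendly compared to a sigmoid. Inputs are assumed to be
 * narrow signed fixed-point values (8-bit two's complement here),
 * in the spirit of the reduced data widths the abstract describes.
 *
 * Compile with: cc relu_sketch.c -lm
 */
#include <stdint.h>
#include <stdio.h>
#include <math.h>

/* ReLU on an 8-bit fixed-point value: in hardware this is just the
 * sign bit driving a 2-to-1 multiplexer -- no multipliers, LUT-based
 * approximations, or exponentials are required. */
static inline int8_t relu_q8(int8_t x) {
    return (x < 0) ? 0 : x;
}

/* A sigmoid, by contrast, involves exp(), which must be approximated
 * on an FPGA and costs far more area; shown only for comparison. */
static inline double sigmoid(double x) {
    return 1.0 / (1.0 + exp(-x));
}

int main(void) {
    int8_t samples[] = { -100, -1, 0, 3, 27 };
    for (size_t i = 0; i < sizeof samples / sizeof samples[0]; i++) {
        printf("relu_q8(%4d) = %4d   sigmoid(%4d) = %.4f\n",
               samples[i], relu_q8(samples[i]),
               samples[i], sigmoid((double)samples[i]));
    }
    return 0;
}
```

The same comparator-plus-mux structure applies at any data width, which is consistent with the abstract's observation that narrowing the datapath (e.g. from 8 to 4 bits) shrinks area substantially while costing only a few percentage points of accuracy.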

Citation (APA)

Si, J., Harris, S. L., & Yfantis, E. (2020). Neural Networks on an FPGA and Hardware-Friendly Activation Functions. Journal of Computer and Communications, 8(12), 251–277. https://doi.org/10.4236/jcc.2020.812021
