Reduction of neural network circuits by constant and nearly constant signal propagation


Abstract

This work focuses on optimizing circuits representing neural networks (NNs) in the form of and-inverter graphs (AIGs). The optimization analyzes the neural network's training set to find constant bit values at the primary inputs. These constant values are then propagated through the AIG, removing unnecessary nodes. Furthermore, a trade-off between neural network accuracy and the circuit reduction achieved by constant propagation is investigated by also replacing with constants those inputs that are likely, though not guaranteed, to be zero or one. The experimental results show a significant reduction in circuit size with negligible loss in accuracy.
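The core simplification the abstract describes rests on the Boolean identities AND(0, x) = 0 and AND(1, x) = x applied node by node through the graph. A minimal sketch of this idea, using a hypothetical tuple-based AIG representation (not the paper's actual tooling), might look like:

```python
# Hypothetical AIG node encoding for illustration only:
#   ("const", 0 or 1)                    - constant node
#   ("input", name)                      - primary input
#   ("and", a, b, inv_a, inv_b)          - AND of fanins a, b; the inv_*
#                                          flags model an AIG's inverted edges

def propagate(node, constants):
    """Return 0, 1, or None (unknown) for a node, given that some
    primary inputs were found constant over the training set.

    constants: dict mapping input name -> 0/1.
    """
    kind = node[0]
    if kind == "const":
        return node[1]
    if kind == "input":
        return constants.get(node[1])  # None if the input is not constant
    _, a, b, inv_a, inv_b = node
    va = propagate(a, constants)
    vb = propagate(b, constants)
    if va is not None and inv_a:
        va = 1 - va
    if vb is not None and inv_b:
        vb = 1 - vb
    # AND(0, x) = 0: the node is constant, so the other fanin's logic
    # cone becomes unnecessary and can be removed from the circuit.
    if va == 0 or vb == 0:
        return 0
    if va == 1 and vb == 1:
        return 1
    # AND(1, x) = x: the node reduces to its remaining fanin; its own
    # value stays unknown, but one AND gate disappears.
    return None
```

A node that evaluates to 0 or 1 is replaced by that constant, which is what shrinks the circuit; the "nearly constant" variant in the paper applies the same replacement to inputs that are only likely to hold a fixed value, trading some accuracy for further reduction.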

Citation (APA)

Berndt, A. A. S., Mishchenko, A., Butzen, P. F., & Reis, A. I. (2019). Reduction of neural network circuits by constant and nearly constant signal propagation. In Proceedings - 32nd Symposium on Integrated Circuits and Systems Design, SBCCI 2019. Association for Computing Machinery, Inc. https://doi.org/10.1145/3338852.3339874
