Abstract
Computing-In-Memory (CIM), based on a non-von Neumann architecture, has recently received significant attention for its lower delay overhead and higher energy efficiency in convolutional and fully-connected neural network computing. A growing body of work has prioritized the memory array and peripheral circuits that realize the multiply-and-accumulate (MAC) operation, but high-precision hardware implementations of the non-linear layers have so far received far less attention, and these layers still incur time overhead and power consumption. The sigmoid is a widely used non-linear activation function, yet most existing implementations only approximate its expression rather than match it exactly, inevitably introducing considerable error. To address this issue, we propose a high-precision circuit implementation of the sigmoid that, for the first time, matches the expression exactly. Simulation results with the SMIC 40 nm process show that the proposed circuit closely reproduces the ideal sigmoid, with a maximum error of 2.74% and an average error of 0.21% between the simulated and the ideal sigmoid. In addition, a multi-layer convolutional neural network based on the CIM architecture and employing the simulated high-precision sigmoid activation function achieves recognition accuracy on a handwritten-digit test set comparable to using the ideal sigmoid in software: 97.06% with online training and 97.74% with offline training.
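For reference, the function the circuit is matched to is the ideal sigmoid, σ(x) = 1/(1 + e^(−x)). The sketch below is a minimal Python illustration, not the authors' measurement flow: it shows how a maximum and an average error between simulated circuit outputs and the ideal sigmoid could be computed. The sampled input range, the placeholder "simulated" data, and the assumption that the error is an absolute difference on the unit output range are all assumptions, since the abstract does not state the exact error definition.

```python
import numpy as np

def ideal_sigmoid(x):
    """Ideal sigmoid activation: sigma(x) = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical stand-in for the circuit's simulated transfer curve,
# sampled over an assumed input range (placeholder data only).
x = np.linspace(-8.0, 8.0, 1000)
simulated = ideal_sigmoid(x) + np.random.normal(0.0, 0.005, x.shape)

# Error expressed as a percentage of the unit output range (assumed
# normalization; the paper reports 2.74% maximum and 0.21% average error).
error_pct = np.abs(simulated - ideal_sigmoid(x)) * 100.0
print(f"max error: {error_pct.max():.2f}%, average error: {error_pct.mean():.2f}%")
```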
Citation
Xu, S., Li, X., Xie, C., Chen, H., Chen, C., & Song, Z. (2021). A high‐precision implementation of the sigmoid activation function for computing‐in‐memory architecture. Micromachines, 12(10). https://doi.org/10.3390/mi12101183