Hybrid ADDer: A Viable Solution for Efficient Design of MAC in DNNs

Abstract

This research article proposes a solution for efficient hardware implementation of deep neural networks (DNNs) in Edge-AI applications. An effective Hybrid ADDer (HADD) block for accumulation in fixed-point multiply-accumulate (MAC) operations is developed to overcome area and power limitations. The proposed HADD design offers a considerable reduction in area and power consumption, with a tolerable accuracy loss and reduced latency. The inference results show accuracies of 96.97% and 96.64% on the MNIST and A-Z Handwritten Alphabet datasets, respectively, using the LeNet-5 DNN model. Compared to a conventional adder implementation, the proposed HADD design reduces area utilization by 44%, power consumption by 51%, and delay by 19% for 8-bit precision at 180 nm. For the same bit precision at 45 nm, it reduces area by 31%, power consumption by 34%, and delay by 8.1%. The proposed design is further evaluated in edge-detection applications, with promising results for standard test images. Overall, the proposed accumulator arithmetic block is a viable solution for error-tolerant AI applications, including DNNs for image classification, object recognition, and other image-processing tasks.
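To make the abstract's central operation concrete, the following is a minimal software sketch of a fixed-point MAC step — the quantize-multiply-accumulate pattern whose accumulation stage the HADD block targets in hardware. All names and scale parameters here are illustrative assumptions, not the paper's actual HADD circuit, which is a dedicated hardware adder design.

```python
def quantize(x, scale, bits=8):
    """Map a float to a signed fixed-point integer of the given bit width,
    saturating at the representable range (e.g. [-128, 127] for 8 bits)."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return max(lo, min(hi, round(x / scale)))

def mac(weights, activations, w_scale, a_scale, bits=8):
    """Accumulate quantized products, as a DNN layer's dot product would,
    then rescale the integer sum back to a real value."""
    acc = 0
    for w, a in zip(weights, activations):
        acc += quantize(w, w_scale, bits) * quantize(a, a_scale, bits)
    return acc * w_scale * a_scale  # dequantize the accumulated sum

# Hypothetical 8-bit example: exact dot product is 0.5*1.0 + (-0.25)*2.0 = 0
result = mac([0.5, -0.25], [1.0, 2.0], w_scale=1 / 128, a_scale=1 / 32)
```

In hardware, the repeated integer addition in the loop is where an approximate adder such as HADD trades a small accuracy loss for the area, power, and delay savings reported above.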

Citation (APA)

Trivedi, V., Lalwani, K., Raut, G., Khomane, A., Ashar, N., & Vishvakarma, S. K. (2023). Hybrid ADDer: A Viable Solution for Efficient Design of MAC in DNNs. Circuits, Systems, and Signal Processing, 42(12), 7596–7614. https://doi.org/10.1007/s00034-023-02469-1
