Efficient Accuracy Recovery in Approximate Neural Networks by Systematic Error Modelling


Abstract

Approximate Computing is a promising paradigm for mitigating the computational demands of Deep Neural Networks (DNNs), trading off DNN performance against area, throughput, or power. The DNN accuracy lost to such approximations can then be effectively recovered through retraining. In this paper, we present a novel methodology for modelling the approximation error introduced by approximate hardware in DNNs, which accelerates retraining and achieves negligible accuracy loss. To this end, we implement behavioural simulations of several approximate multipliers and model the error generated by such approximations on pre-trained DNNs for image classification on CIFAR10 and ImageNet. Finally, we optimize the DNN parameters by applying our error model during DNN retraining, to recover the accuracy lost due to approximations. Experimental results demonstrate the efficiency of our proposed method for accelerated retraining (11× faster for CIFAR10 and 8× faster for ImageNet) for full DNN approximation, which allows us to deploy approximate multipliers with energy savings of up to 36% for 8-bit precision DNNs with an accuracy loss lower than 1%.
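To illustrate the general idea behind the methodology described above, the sketch below (not the authors' implementation) behaviourally simulates a hypothetical approximate 8-bit multiplier that truncates the three least-significant bits of the product, exhaustively tabulates its error over all operand pairs, and then uses the resulting error statistics as an additive noise model during retraining-time matrix multiplication, instead of simulating every approximate multiplication individually. The multiplier, the noise model, and the `noisy_matmul` helper are all illustrative assumptions.

```python
import numpy as np

def approx_mul(a, b):
    # Hypothetical approximate 8-bit multiplier: zeroes the three
    # least-significant bits of the exact product (a stand-in for the
    # behavioural multiplier models evaluated in the paper).
    return (a * b) & ~0x7

# Exhaustive behavioural simulation over all 8-bit unsigned operand pairs.
a, b = np.meshgrid(np.arange(256), np.arange(256), indexing="ij")
error = approx_mul(a, b) - a * b          # per-pair error table

# Systematic error model: mean and standard deviation of the error.
mu, sigma = error.mean(), error.std()

rng = np.random.default_rng(0)

def noisy_matmul(x, w):
    # Retraining-time surrogate: exact matmul plus noise drawn from the
    # modelled error distribution, scaled by the number of accumulated
    # multiplications per output element. Avoiding per-multiplication
    # simulation is what makes retraining fast.
    y = x @ w
    n_mults = x.shape[-1]
    noise = rng.normal(mu * n_mults, sigma * np.sqrt(n_mults), y.shape)
    return y + noise
```

During retraining, a surrogate like `noisy_matmul` would replace the exact layer computation so the optimizer adapts the weights to the modelled hardware error.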

Citation (APA)

Parra, C. D. L., Guntoro, A., & Kumar, A. (2021). Efficient Accuracy Recovery in Approximate Neural Networks by Systematic Error Modelling. In Proceedings of the Asia and South Pacific Design Automation Conference, ASP-DAC (pp. 365–371). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1145/3394885.3431533
