Low-Power Hardware Accelerator for Sparse Matrix Convolution in Deep Neural Network


Abstract

Deep Neural Networks (DNNs) have reached outstanding accuracy in recent years, often exceeding human abilities. Nowadays, DNNs are widely used in many Artificial Intelligence (AI) applications such as computer vision, natural language processing, and autonomous driving. However, this remarkable performance comes at a high computational cost, requiring complex hardware platforms. Therefore, the need arises for dedicated hardware accelerators that drastically speed up execution while maintaining low power consumption. This paper presents innovative techniques to tackle the matrix sparsity that non-linear activation functions introduce in convolutional DNNs. The developed architectures skip unnecessary operations, such as multiplications by zero, without sacrificing accuracy or throughput, while improving energy efficiency. Such an improvement can enhance the performance of embedded battery-powered applications with limited budgets, where cost-effective hardware, accuracy, and battery life are critical to expanding the deployment of AI.
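The core idea described in the abstract, skipping multiplications whose input activation is zero, can be illustrated with a short software sketch. The following is a minimal, assumed software analogue of zero-skipping convolution (it is not the paper's hardware architecture): it iterates only over the non-zero activations, such as those produced by ReLU, and scatters each one's contributions to the outputs it overlaps.

```python
import numpy as np

def sparse_conv2d(activations, kernel):
    """2-D cross-correlation (valid padding) that skips all multiplications
    where the input activation is zero -- an illustrative software analogue
    of a zero-skipping sparse-convolution accelerator."""
    H, W = activations.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    # Visit only non-zero activations (the sparsity that ReLU-like
    # functions produce); each contributes to every output it overlaps.
    for r, c in zip(*np.nonzero(activations)):
        a = activations[r, c]
        for i in range(kH):
            for j in range(kW):
                oy, ox = r - i, c - j
                if 0 <= oy < out.shape[0] and 0 <= ox < out.shape[1]:
                    out[oy, ox] += a * kernel[i, j]
    return out
```

With, say, 70% of activations equal to zero after ReLU, roughly 70% of the multiply-accumulate work is skipped while the result remains identical to the dense computation, which is the accuracy-preserving property the abstract highlights.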

Citation (APA)

Anzalone, E., Capra, M., Peloso, R., Martina, M., & Masera, G. (2021). Low-Power Hardware Accelerator for Sparse Matrix Convolution in Deep Neural Network. In Smart Innovation, Systems and Technologies (Vol. 184, pp. 79–89). Springer. https://doi.org/10.1007/978-981-15-5093-5_8
