A Survey on Efficient Convolutional Neural Networks and Hardware Acceleration


Abstract

Over the past decade, deep-learning-based representations have demonstrated remarkable performance in both academia and industry. The learning capability of convolutional neural networks (CNNs) stems from a combination of feature extraction layers that fully exploit large amounts of data. However, CNNs often demand substantial computation and memory resources when they replace traditional hand-engineered features in existing systems. In this review, we focus on three aspects of efficient deep learning: quantized/binarized models, optimized architectures, and resource-constrained systems. We survey recent advances in lightweight deep learning models and neural architecture search (NAS) algorithms, from simplified layers and efficient convolutions to new architectural designs and optimizations. In addition, we examine practical applications of efficient CNNs across various types of hardware architectures and platforms.
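To make the first two of these aspects concrete, below is a minimal, self-contained PyTorch sketch (not taken from the paper; all names and numbers are illustrative assumptions) of two techniques the survey covers: a depthwise separable convolution of the kind used in lightweight architectures such as MobileNet, and simple symmetric per-tensor int8 post-training quantization of a weight tensor.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Factorizes a standard convolution into a per-channel (depthwise)
    3x3 convolution followed by a 1x1 pointwise convolution, cutting
    parameters and multiply-accumulates roughly 8-9x for 3x3 kernels.
    Illustrative sketch, not the survey authors' implementation."""
    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pointwise(self.depthwise(x))

def quantize_int8(w: torch.Tensor) -> tuple[torch.Tensor, float]:
    """Symmetric per-tensor int8 quantization: store 8-bit integers plus
    one float scale, so w ~= q * scale at a 4x memory saving over fp32."""
    scale = w.abs().max().item() / 127.0
    q = torch.clamp(torch.round(w / scale), -127, 127).to(torch.int8)
    return q, scale

if __name__ == "__main__":
    params = lambda m: sum(p.numel() for p in m.parameters())
    standard = nn.Conv2d(64, 128, kernel_size=3, padding=1, bias=False)
    separable = DepthwiseSeparableConv(64, 128)
    print(params(standard), params(separable))  # 73728 vs. 8768 (~8.4x fewer)

    q, scale = quantize_int8(standard.weight.data)
    err = (q.float() * scale - standard.weight.data).abs().max()
    print(f"max dequantization error: {err:.4f}")
```

In practice, deep learning frameworks ship tuned kernels for both ideas (grouped convolutions and quantized inference backends); the sketch is only meant to show the arithmetic that drives the efficiency gains the survey reviews.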

Citation (APA)

Ghimire, D., Kil, D., & Kim, S. H. (2022). A survey on efficient convolutional neural networks and hardware acceleration. Electronics, 11(6), 945. https://doi.org/10.3390/electronics11060945
