DeepCompNet: A Novel Neural Net Model Compression Architecture

13 citations · 19 readers

This article is free to access.

Abstract

The emergence of powerful deep learning architectures has led to breakthrough innovations in fields such as healthcare, precision farming, banking, and education. Despite these advances, deploying deep learning models on resource-constrained devices remains difficult because of their large memory footprint. This work presents a hybrid compression pipeline for neural networks that exploits the untapped potential of the z-score for weight pruning, followed by quantization using DBSCAN clustering and Huffman encoding. The proposed model is evaluated on LeNet deep neural network architectures using the standard MNIST and CIFAR datasets. Experimental results show that DeepCompNet achieves 26x compression without compromising accuracy. This synergistic blend of compression algorithms eases the deployment of deep learning models on memory-constrained devices.
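The three stages named in the abstract compose naturally. The sketch below is a minimal illustration only, not the authors' implementation: a NumPy array stands in for one layer's weights, scikit-learn's DBSCAN supplies the clustering step, and the z-score cutoff, eps, and min_samples values are placeholder choices.

```python
# Hedged sketch of the pipeline outlined in the abstract: z-score pruning,
# DBSCAN-based weight sharing, then Huffman coding of the cluster indices.
# All thresholds and layer sizes are illustrative, not taken from the paper.
import heapq
from collections import Counter

import numpy as np
from sklearn.cluster import DBSCAN


def zscore_prune(weights, threshold=1.0):
    """Zero out weights whose |z-score| falls below the threshold."""
    z = (weights - weights.mean()) / (weights.std() + 1e-12)
    mask = np.abs(z) >= threshold
    return weights * mask, mask


def dbscan_quantize(pruned, mask, eps=0.05, min_samples=5):
    """Cluster surviving weights; each cluster shares its mean value."""
    survivors = pruned[mask].reshape(-1, 1)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit(survivors).labels_
    codebook = {}
    for lab in np.unique(labels):
        # Noise points (label -1) are simply grouped and averaged as well.
        codebook[int(lab)] = float(survivors[labels == lab].mean())
    quantized = pruned.copy()
    quantized[mask] = [codebook[int(l)] for l in labels]
    return quantized, labels, codebook


def huffman_code(symbols):
    """Build a Huffman code table for a sequence of symbols."""
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate single-symbol case
        return {next(iter(freq)): "0"}
    heap = [[f, i, {s: ""}] for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, [f1 + f2, tie, merged])
        tie += 1
    return heap[0][2]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(0.0, 0.1, size=(64, 64))   # stand-in for one small layer
    pruned, mask = zscore_prune(w, threshold=1.0)
    quantized, labels, codebook = dbscan_quantize(pruned, mask)
    codes = huffman_code(labels.tolist())
    bits = sum(len(codes[int(l)]) for l in labels)
    print(f"kept {mask.mean():.1%} of weights, "
          f"{len(codebook)} shared values, {bits} bits for indices "
          f"(vs {int(mask.sum()) * 32} bits at float32)")
```

In a full pipeline the shared values would typically be fine-tuned and the sparse indices stored in a compact format; the sketch only shows how the pruning, clustering, and entropy-coding stages fit together.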

Cite (APA)

Mary Shanthi Rani, M., Chitra, P., Lakshmanan, S., Kalpana Devi, M., Sangeetha, R., & Nithya, S. (2022). DeepCompNet: A Novel Neural Net Model Compression Architecture. Computational Intelligence and Neuroscience, 2022. https://doi.org/10.1155/2022/2213273
