Compression of deep neural networks is a critical problem when it comes to enhancing the capability of embedded devices. Because deep neural networks are space- and compute-intensive, they are generally unsuitable for edge devices, which limits their ubiquity. This paper discusses novel methods of neural network pruning that make models lighter, faster, and more robust to noise and over-fitting without compromising accuracy. It poses two questions about accepted pruning methods and proffers two new strategies, evolution of weights and smart pruning, to compress deep neural networks more effectively. These methods are then compared with the standard pruning mechanism on benchmark data sets to establish their efficiency. The code is made available online for public use.
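The abstract does not detail the paper's two strategies, but the "standard pruning mechanism" it compares against is conventionally magnitude-based pruning: weights with the smallest absolute values are zeroed out. A minimal sketch of that baseline, assuming a NumPy weight matrix and a target sparsity fraction (the function name and interface here are illustrative, not the authors' code):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out roughly the smallest-magnitude `sparsity` fraction of weights.

    This is the conventional magnitude-pruning baseline, not the paper's
    proposed smart-pruning or weight-evolution method.
    """
    flat = np.abs(weights).flatten()
    k = int(len(flat) * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    # Keep only weights strictly above the threshold (ties are also pruned)
    mask = np.abs(weights) > threshold
    return weights * mask

# Example: prune 50% of a small weight matrix
w = np.array([[0.1, -0.8], [0.05, 1.2]])
pruned = magnitude_prune(w, 0.5)
```

In practice this is applied layer by layer (or globally across all layers), after which the network is fine-tuned to recover any lost accuracy.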
CITATION STYLE
Islam, A., & Belhaouari, S. B. (2023). Smart Pruning of Deep Neural Networks Using Curve Fitting and Evolution of Weights. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13811 LNCS, pp. 62–76). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-25891-6_6