Nowadays, convolutional neural networks (CNNs) play a major role in image processing tasks such as image classification, object detection, and semantic segmentation. CNNs often have from several to hundreds of stacked layers and several megabytes of weights. One possible technique to reduce complexity and memory footprint is pruning. Pruning is the process of removing weights that connect neurons in two adjacent layers of the network. Finding a near-optimal solution with a specified, acceptable drop in accuracy becomes more challenging as the number of convolutional layers in the model grows. In the paper, several approaches, both with and without retraining, are described and compared.
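The abstract does not specify which pruning criterion the paper uses; a common baseline for "removing weights" is magnitude pruning, where the smallest-magnitude weights are zeroed out. The sketch below illustrates this idea with NumPy; the function name, layer shape, and 50% sparsity level are illustrative assumptions, not details from the paper.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest magnitudes.

    Returns the pruned weight tensor and the binary mask of kept weights.
    (Illustrative baseline, not the paper's specific method.)
    """
    magnitudes = np.abs(weights)
    # Threshold below which weights are considered removable.
    threshold = np.quantile(magnitudes, sparsity)
    mask = magnitudes > threshold
    return weights * mask, mask

# Example: prune a conv layer with 16 filters of shape 3x3x3.
rng = np.random.default_rng(0)
w = rng.standard_normal((16, 3, 3, 3))
pruned, mask = magnitude_prune(w, 0.5)
print(f"kept {mask.mean():.0%} of weights")
```

In practice the mask would be reapplied after each fine-tuning step (the "retrain" setting) or the pruned weights frozen at zero (the "no retrain" setting), which is the trade-off the paper examines.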
CITATION STYLE
Pietron, M., & Wielgosz, M. (2020). Retrain or not retrain? - efficient pruning methods of deep cnn networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12139 LNCS, pp. 452–463). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-50420-5_34