Less is more: Towards compact CNNs

Abstract

To attain favorable performance on large-scale datasets, convolutional neural networks (CNNs) are usually designed with very high capacity, involving millions of parameters. In this work, we aim at optimizing the number of neurons in a network, and thus the number of parameters. We show that, by incorporating sparse constraints into the objective function, it is possible to prune away a large fraction of the neurons during the training stage. As a result, the number of parameters and the memory footprint of the network are reduced, which is also desirable at test time. We evaluate our method on several well-known CNN architectures, including AlexNet and VGG, over different datasets including ImageNet. Extensive experimental results demonstrate that our method leads to compact networks. Taking the first fully connected layer as an example, our compact CNN contains only 30% of the original neurons without any degradation of the top-1 classification accuracy.
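One common way to realize the sparse constraint described in the abstract is a group-sparsity (group-Lasso) penalty over each neuron's incoming weights, so that entire neurons are driven to zero during training. The following is a minimal PyTorch sketch of that idea; the penalty form, the layer sizes, and the regularization strength lam are illustrative assumptions and do not reproduce the exact formulation or hyperparameters of the paper.

import torch
import torch.nn as nn

def neuron_group_penalty(layer: nn.Linear) -> torch.Tensor:
    # Each row of layer.weight holds one output neuron's incoming weights.
    # Summing the per-row l2 norms gives a group-Lasso term that can drive
    # whole neurons to zero, rather than individual weights.
    return layer.weight.norm(p=2, dim=1).sum()

# Illustrative model, loss, and optimizer (sizes are hypothetical).
model = nn.Sequential(nn.Linear(9216, 4096), nn.ReLU(), nn.Linear(4096, 1000))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
lam = 1e-4  # assumed regularization strength, not taken from the paper

def training_step(x, y):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    # Add the neuron-level sparsity term for every fully connected layer.
    for m in model.modules():
        if isinstance(m, nn.Linear):
            loss = loss + lam * neuron_group_penalty(m)
    loss.backward()
    optimizer.step()
    return loss.item()

After training with such a penalty, neurons whose incoming weight norm has collapsed to (near) zero can be removed, which is what yields the reduction in parameters and memory footprint discussed above.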

Cite

CITATION STYLE

APA

Zhou, H., Alvarez, J. M., & Porikli, F. (2016). Less is more: Towards compact CNNs. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9908 LNCS, pp. 662–677). Springer Verlag. https://doi.org/10.1007/978-3-319-46493-0_40
