Less is more: Towards compact CNNs

228 citations · 153 Mendeley readers

Abstract

To attain favorable performance on large-scale datasets, convolutional neural networks (CNNs) are usually designed with very high capacity, involving millions of parameters. In this work, we aim to optimize the number of neurons in a network, and thus the number of parameters. We show that, by incorporating sparse constraints into the objective function, it is possible to decimate the number of neurons during the training stage. As a result, the number of parameters and the memory footprint of the network are reduced, which is also desirable at test time. We evaluated our method on several well-known CNN architectures, including AlexNet and VGG, over different datasets including ImageNet. Extensive experimental results demonstrate that our method leads to compact networks. Taking the first fully connected layer as an example, our compact CNN retains only 30% of the original neurons without any degradation of top-1 classification accuracy.
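
To make the sparse-constraint idea concrete, here is a minimal PyTorch sketch written under assumptions of our own: the layer sizes, the regularization weight lam, and the pruning threshold are all illustrative, and the group-lasso (l2,1) penalty used here is one standard way to zero out whole neurons, standing in for the paper's sparse constraints rather than reproducing them exactly. The penalty is the l2 norm of each neuron's row of incoming weights, summed over neurons and added to the classification loss, so entire rows (and hence entire neurons) are driven to zero during training.

    import torch
    import torch.nn as nn

    # Illustrative two-layer classifier; the layer sizes are placeholders.
    model = nn.Sequential(
        nn.Linear(4096, 1024),
        nn.ReLU(),
        nn.Linear(1024, 10),
    )

    def group_sparsity_penalty(layer):
        # l2,1 ("group lasso") penalty: the l2 norm of each neuron's row of
        # incoming weights, summed over neurons. Minimizing it pushes whole
        # rows toward zero, removing neurons rather than individual weights.
        return layer.weight.norm(p=2, dim=1).sum()

    lam = 1e-4  # regularization strength (a tunable assumption, not from the paper)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    def training_step(x, y):
        optimizer.zero_grad()
        loss = criterion(model(x), y) + lam * group_sparsity_penalty(model[0])
        loss.backward()
        optimizer.step()
        return loss.item()

    # Dummy batch to show usage.
    x = torch.randn(8, 4096)
    y = torch.randint(0, 10, (8,))
    training_step(x, y)

    # After training, neurons whose incoming-weight rows have collapsed to
    # (near) zero can be pruned, shrinking the layer and its parameter count.
    with torch.no_grad():
        row_norms = model[0].weight.norm(p=2, dim=1)
        keep = row_norms > 1e-3  # pruning threshold (an assumption)
        print(f"keeping {int(keep.sum())} of {keep.numel()} neurons")

Once a neuron's incoming weights vanish, its output becomes a constant that can be folded into the next layer's bias, so the matching column of the next layer's weight matrix can be dropped as well; this is what shrinks the parameter count and memory footprint at test time.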

Citation (APA)

Zhou, H., Alvarez, J. M., & Porikli, F. (2016). Less is more: Towards compact CNNs. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9908 LNCS, pp. 662–677). Springer Verlag. https://doi.org/10.1007/978-3-319-46493-0_40

Readers' Seniority

PhD / Post grad / Masters / Doc: 81 (74%)
Researcher: 21 (19%)
Professor / Associate Prof.: 6 (5%)
Lecturer / Post doc: 2 (2%)

Readers' Discipline

Computer Science: 87 (74%)
Engineering: 25 (21%)
Mathematics: 3 (3%)
Physics and Astronomy: 2 (2%)
