Layer-Wise Training to Create Efficient Convolutional Neural Networks

Abstract

Recent large CNNs have delivered impressive performance, but their storage requirements and computational cost limit their deployment on mobile devices and in the large-scale Internet industry. Work focusing on storage compression has achieved great success; more recently, reducing computational cost has drawn increasing attention. In this paper, we propose an algorithm to reduce computational cost, a problem often addressed with sparsification and matrix decomposition methods. Since the computation is dominated by convolutional operations, we focus on compressing the convolutional layers. Unlike sparsification and matrix decomposition methods, which are usually derived mathematically, we draw inspiration from transfer learning and biological neural networks. We transfer the knowledge in state-of-the-art large networks to compressed small ones via layer-wise training: we replace the complex convolutional layers in the large networks with more efficient modules and train the compressed networks so that the output of each layer stays consistent with the original. The modules in the compressed small networks are more efficient, and their design draws on biological neural networks. For the AlexNet model, we achieve a 3.62× speedup with a 0.11% increase in top-5 error rate; for the VGG model, a 5.67× speedup with a 0.43% increase.
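The layer-wise idea described above can be sketched in miniature. In im2col form a convolution is a matrix multiply, so the hypothetical sketch below models one frozen teacher conv layer as a dense map and the "efficient module" as a low-rank factorization trained by gradient descent to reproduce the teacher layer's outputs. All names, sizes, the learning rate, and the choice of a low-rank module are illustrative assumptions, not the paper's actual modules or training recipe.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions are illustrative: input/output channels, module rank, batch size.
d_in, d_out, rank, n = 64, 64, 8, 256

# Frozen "teacher" layer (a conv layer viewed as a dense map via im2col).
W_teacher = rng.standard_normal((d_out, d_in)) / np.sqrt(d_in)

# Efficient student module: two thin factors with rank << d_in, so it needs
# far fewer multiply-adds than the dense teacher layer.
W1 = 0.1 * rng.standard_normal((rank, d_in))
W2 = 0.1 * rng.standard_normal((d_out, rank))

X = rng.standard_normal((d_in, n))  # activations entering this layer
Y = W_teacher @ X                   # teacher outputs: the layer-wise target

loss_init = float(((W2 @ (W1 @ X) - Y) ** 2).mean())

lr = 0.5  # illustrative step size
for _ in range(1000):
    H = W1 @ X            # student's intermediate activations
    E = W2 @ H - Y        # mismatch against the teacher's outputs
    g = 2.0 / E.size      # gradient scale for the mean-squared loss
    gW2 = g * (E @ H.T)
    gW1 = g * (W2.T @ E @ X.T)
    W2 -= lr * gW2
    W1 -= lr * gW1

loss_final = float(((W2 @ (W1 @ X) - Y) ** 2).mean())
print(f"layer-wise matching loss: {loss_init:.3f} -> {loss_final:.3f}")
```

In a full pipeline, this matching step would be repeated layer by layer, feeding each compressed layer's output forward as the next layer's input; the loss does not reach zero here because a rank-8 module can only approximate a full-rank teacher layer.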

Citation (APA)

Zeng, L., & Tian, X. (2017). Layer-Wise Training to Create Efficient Convolutional Neural Networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10635 LNCS, pp. 631–641). Springer Verlag. https://doi.org/10.1007/978-3-319-70096-0_65
