Linear regularized compression of deep convolutional neural networks

Abstract

In recent years, deep neural networks have revolutionized machine learning tasks. However, the design of deep neural network architectures is still based on trial-and-error procedures, and the resulting models are usually complex, with high computational cost. This is the reason behind the efforts made in the deep learning community to create small and compact models with accuracy comparable to that of current deep neural networks. Different methods to reach this goal have been presented in the literature; among them, techniques based on low-rank factorization are used to compress pre-trained models, with the aim of providing a more compact version of them without losing their effectiveness. Despite their promising results, these techniques produce auxiliary structures between network layers; this work shows that it is possible to overcome the need for such elements by using simple regularization techniques. We tested our approach on the VGG16 model, obtaining a four-times-faster reduction without loss in accuracy while avoiding supplementary structures between the network layers.
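To make the low-rank factorization idea concrete, the following is a minimal sketch, assuming a plain truncated-SVD compression of a single fully connected layer's weight matrix; the function name low_rank_factorize, the layer size, and the rank r = 256 are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' exact procedure): compress a dense layer's
# weight matrix W (out_features x in_features) into two smaller factors A and B
# such that A @ B approximates W, reducing the parameter count.
import numpy as np

def low_rank_factorize(W: np.ndarray, r: int):
    """Factor W (m x n) into A (m x r) and B (r x n) via truncated SVD."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :r] * S[:r]   # absorb the singular values into the left factor
    B = Vt[:r, :]
    return A, B

# Example with a hypothetical 4096 x 4096 layer compressed to rank 256.
rng = np.random.default_rng(0)
W = rng.standard_normal((4096, 4096)).astype(np.float32)
A, B = low_rank_factorize(W, r=256)

original_params = W.size
compressed_params = A.size + B.size
print(f"parameters: {original_params} -> {compressed_params} "
      f"({original_params / compressed_params:.1f}x fewer)")
```

In this sketch the single layer is replaced by two consecutive linear maps (first B, then A), which is the kind of auxiliary structure the paper's regularization approach is designed to avoid.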

Cite

APA

Ceruti, C., Campadelli, P., & Casiraghi, E. (2017). Linear regularized compression of deep convolutional neural networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10484 LNCS, pp. 244–253). Springer Verlag. https://doi.org/10.1007/978-3-319-68560-1_22
