Accelerating Deep Convnets via Sparse Subspace Clustering

Abstract

While research on convolutional neural networks (CNNs) is progressing quickly, real-world deployment of these models is often limited by computing resources and memory constraints. In this paper, we address this issue by proposing a novel filter pruning method to compress and accelerate CNNs. Our method reduces the redundancy in a convolutional layer by applying sparse subspace clustering to its output feature maps, so that most of the representative information in the network is retained within each cluster. Our method thus provides a principled alternative to existing filter pruning approaches, most of which remove filters directly based on simple heuristics. The proposed method is independent of the network structure and can therefore be adopted by any off-the-shelf deep learning library. Evaluated on VGG-16 and ResNet-50 using ImageNet, our method outperforms existing techniques before fine-tuning, and achieves state-of-the-art results after fine-tuning.
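The abstract describes clustering a layer's output feature maps with sparse subspace clustering (SSC) so that redundant filters fall into the same cluster. The paper's exact formulation is not reproduced here, but a minimal sketch of the standard SSC pipeline (each sample expressed as a sparse combination of the others via Lasso, followed by spectral clustering on the resulting affinity) can be written as follows; the function name, the `alpha` value, and the use of flattened feature maps as samples are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def sparse_subspace_clustering(X, n_clusters, alpha=0.01):
    """Cluster rows of X (e.g. flattened per-filter feature maps) via SSC.

    X: (n_samples, n_features) array, one flattened feature map per row.
    Returns an array of cluster labels, one per row.
    """
    n = X.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        # Express sample i as a sparse linear combination of all other samples
        # (the self-coefficient is excluded, as in standard SSC).
        others = np.delete(X, i, axis=0)              # (n-1, n_features)
        lasso = Lasso(alpha=alpha, max_iter=5000)
        lasso.fit(others.T, X[i])                     # solve X[i] ~ others.T @ c
        C[i] = np.insert(lasso.coef_, i, 0.0)         # re-insert zero self-weight
    # Symmetrize the sparse codes into a nonnegative affinity matrix.
    W = np.abs(C) + np.abs(C).T
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity="precomputed",
                                random_state=0).fit_predict(W)
    return labels
```

In a pruning setting, one would then keep a representative filter per cluster (for instance, the one whose feature map has the largest response norm) and remove the rest before fine-tuning; the selection criterion here is again an assumption for illustration.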

Citation (APA)

Wang, D., Shi, S., Bai, X., & Zhang, X. (2019). Accelerating Deep Convnets via Sparse Subspace Clustering. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11902 LNCS, pp. 595–606). Springer. https://doi.org/10.1007/978-3-030-34110-7_50
