Adversarial structured neural network pruning

Abstract

In recent years, convolutional neural networks (CNNs) have been successfully employed for a wide variety of tasks owing to their high capacity. This capacity is a double-edged sword, however: it comes from millions of parameters, which introduce substantial redundancy and dramatically increase computational complexity. Pruning a pretrained network to make it thinner and easier to deploy on resource-limited devices therefore remains a challenging task. In this paper, we employ the idea of adversarial examples to sparsify a CNN. Adversarial examples were originally designed to fool a network; rather than perturbing the input image, we view any intermediate layer as the input to the layers that follow it. By running an adversarial attack algorithm on these intermediate representations, we can observe the sensitivity of the network's components. With this information, we prune in a structured manner, retaining only the most critical channels. Empirical evaluations show that the proposed approach achieves state-of-the-art structured pruning performance.
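To make the mechanism concrete, below is a minimal PyTorch sketch of the general idea, not the authors' exact algorithm: the output of an early block is treated as the "input" to the remaining layers, the gradient that an FGSM-style attack would take with respect to that feature map is computed, and its per-channel magnitude serves as a sensitivity score for structured (channel-level) pruning. The `head`/`tail` split, the scoring rule, and all names here are illustrative assumptions.

```python
# Hedged sketch of adversarial-sensitivity channel scoring; illustrative,
# not the paper's algorithm. We treat an intermediate feature map as an
# input, take the gradient an FGSM-style attack would use, and score each
# channel by its average absolute gradient.
import torch
import torch.nn as nn

def channel_sensitivity(head: nn.Module, tail: nn.Module,
                        x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Score each channel of head(x) by how strongly an adversarial
    perturbation of that channel would affect the loss of `tail`."""
    feat = head(x).detach().requires_grad_(True)  # layer output viewed as an input
    loss = nn.functional.cross_entropy(tail(feat), y)
    loss.backward()
    # Average absolute gradient per channel (reduce batch, height, width).
    return feat.grad.abs().mean(dim=(0, 2, 3))

# Toy usage: split a small CNN after its first conv block and keep only
# the most sensitive channels.
head = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
tail = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10))
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))

scores = channel_sensitivity(head, tail, x, y)
keep = scores.topk(k=8).indices  # retain the 8 most critical channels
print("channels to keep:", sorted(keep.tolist()))
```

A channel whose feature map receives large adversarial gradients is one the downstream layers depend on heavily, so low-scoring channels are natural pruning candidates; in practice the pruned network would then be fine-tuned to recover accuracy.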

Citation (APA)

Cai, X., Yi, J., Zhang, F., & Rajasekaran, S. (2019). Adversarial structured neural network pruning. In International Conference on Information and Knowledge Management, Proceedings (pp. 2433–2436). Association for Computing Machinery. https://doi.org/10.1145/3357384.3358150
