Understanding deep neural network by filter sensitive area generation network


Abstract

Deep convolutional networks have recently gained much attention because of their impressive performance on visual tasks. However, it is still not clear why they achieve such great success. In this paper, a novel approach called the Filter Sensitive Area Generation Network (FSAGN) is proposed to interpret what the convolutional filters have learnt after training a CNN. Given any trained CNN model, the proposed method aims to figure out which object part each filter in a high conv-layer represents, by learning an appropriate input image mask that filters out unrelated areas. To obtain such a mask, a mask generation network is designed, and a corresponding loss function is defined to evaluate the changes in feature maps before and after the mask operation. Experiments on multiple datasets and networks show that FSAGN clarifies the knowledge representation of each filter and reveals how small disturbances on specific object parts affect the performance of CNNs.
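
The abstract only sketches the approach at a high level. Below is a minimal, hypothetical PyTorch sketch of the masking idea it describes: a small mask-generation network produces a soft spatial mask over the input, and a loss compares a chosen filter's feature map before and after masking while encouraging the mask to stay small. The backbone choice (VGG16), layer index, filter index, and sparsity weight are illustrative assumptions, not the paper's actual FSAGN architecture or loss.

```python
# Sketch of the "mask the input, compare feature maps" idea, assuming a
# PyTorch setup. All concrete choices below are illustrative assumptions.
import torch
import torch.nn as nn
import torchvision.models as models

class MaskGenerator(nn.Module):
    """Produces a soft spatial mask in [0, 1] for the input image (hypothetical design)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)  # shape (N, 1, H, W), broadcast over RGB channels

# Trained CNN whose filters we want to interpret; it stays frozen while the
# mask generator is trained.
cnn = models.vgg16(weights="IMAGENET1K_V1").features.eval()
for p in cnn.parameters():
    p.requires_grad_(False)

mask_gen = MaskGenerator()
optimizer = torch.optim.Adam(mask_gen.parameters(), lr=1e-3)

def feature_maps(x, layer_idx=28):
    """Feature maps of a high conv-layer (layer index is an assumption)."""
    h = x
    for i, layer in enumerate(cnn):
        h = layer(h)
        if i == layer_idx:
            return h
    return h

image = torch.rand(1, 3, 224, 224)   # stand-in for a dataset image
filter_idx = 0                       # filter whose sensitive area we look for

for step in range(100):
    mask = mask_gen(image)
    masked_image = image * mask      # filter out "unrelated" areas

    f_orig = feature_maps(image)[:, filter_idx]
    f_mask = feature_maps(masked_image)[:, filter_idx]

    # Loss: keep the chosen filter's response unchanged under masking while
    # the mask stays small, so only the area the filter is sensitive to
    # survives. (A plausible surrogate for the loss described in the abstract.)
    loss = (f_orig - f_mask).pow(2).mean() + 0.1 * mask.mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

After training, the learned mask can be visualized on the input image to see which object part the chosen filter responds to; repeating this per filter is one way to read the abstract's claim that each high-layer filter corresponds to an object part.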

Cite

APA

Qian, Y., Qiao, H., & Xu, J. (2018). Understanding deep neural network by filter sensitive area generation network. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11301 LNCS, pp. 192–203). Springer Verlag. https://doi.org/10.1007/978-3-030-04167-0_18
