Convolutional neural networks (CNNs) have been successfully applied to a wide range of tasks. However, CNNs are often viewed as "black boxes" that lack interpretability. One main reason is filter-class entanglement: an intricate many-to-many correspondence between filters and classes. Most existing works attempt post-hoc interpretation of a pre-trained model while neglecting to reduce the entanglement underlying the model itself. In contrast, we focus on alleviating filter-class entanglement during training. Inspired by cellular differentiation, we propose a novel strategy to train interpretable CNNs by encouraging class-specific filters, where each filter responds to only one (or a few) classes. Concretely, we design a learnable, sparse Class-Specific Gate (CSG) structure that flexibly assigns each filter to one (or a few) classes. The gate allows a filter's activation to pass only when the input sample comes from the assigned class. Extensive experiments demonstrate that our method produces a sparse and highly class-related representation of the input, which leads to stronger interpretability. Moreover, compared with the standard training strategy, our model shows benefits in applications such as object localization and adversarial sample detection. Code link: https://github.com/hyliang96/CSGCNN.
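To make the gating idea concrete, the sketch below shows one plausible reading of the abstract: a learnable filter-by-class gate matrix that scales each filter's feature map by the gate value of the sample's ground-truth class, plus an L1 penalty to encourage sparsity. This is a minimal illustration, not the authors' exact formulation; the module name ClassSpecificGate, the sigmoid parameterization, and the loss weighting are our assumptions (the official code at the link above is authoritative).

import torch
import torch.nn as nn

class ClassSpecificGate(nn.Module):
    """Hypothetical sketch of a learnable filter-to-class gate.

    Holds a learnable matrix of shape (num_filters, num_classes).
    During training, each filter's feature map is scaled by the gate
    value for the sample's ground-truth class, so a filter's activation
    passes only for the class(es) it is assigned to.
    """

    def __init__(self, num_filters, num_classes):
        super().__init__()
        # Raw logits; a sigmoid keeps gate values in (0, 1).
        self.logits = nn.Parameter(torch.zeros(num_filters, num_classes))

    def forward(self, features, labels):
        # features: (B, F, H, W) feature maps; labels: (B,) class indices.
        gates = torch.sigmoid(self.logits)      # (F, C), values in (0, 1)
        g = gates[:, labels].t()                # (B, F): gate column per sample
        return features * g[:, :, None, None]  # scale each filter's map

    def sparsity_loss(self):
        # L1-style penalty pushing each filter toward few active classes.
        return torch.sigmoid(self.logits).mean()

In such a setup, training would add sparsity_loss() (suitably weighted) to the classification objective, and after training the learned gate matrix itself reveals which classes each filter serves, which is the source of the interpretability, localization, and adversarial-detection benefits the abstract reports.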
CITATION
Liang, H., Ouyang, Z., Zeng, Y., Su, H., He, Z., Xia, S. T., … Zhang, B. (2020). Training Interpretable Convolutional Neural Networks by Differentiating Class-Specific Filters. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12347 LNCS, pp. 622–638). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-58536-5_37