Interpretable Compositional Convolutional Neural Networks

Abstract

A well-founded definition of semantic interpretability is a core challenge in explainable AI. This paper proposes a method to modify a traditional convolutional neural network (CNN) into an interpretable compositional CNN, in which filters in intermediate convolutional layers learn to encode meaningful visual patterns. In a compositional CNN, each filter is supposed to consistently represent a specific compositional object part or image region with a clear meaning. The compositional CNN learns from image labels for classification without any annotations of parts or regions for supervision. Our method can be broadly applied to different types of CNNs. Experiments have demonstrated the effectiveness of our method. The code will be released when the paper is accepted.
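To make the training setup concrete, below is a minimal PyTorch sketch of the general recipe the abstract describes: a standard CNN classifier trained only with image-level class labels, plus an unsupervised auxiliary loss on an intermediate convolutional layer that pushes each filter toward a consistent, localized response. The `scatter_loss` term here (penalizing spatially scattered activation maps via spatial entropy) is a hypothetical stand-in for illustration; it is not the compositional loss actually proposed in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    """A toy CNN whose last conv layer plays the role of the 'interpretable' layer."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),  # filters we want to be interpretable
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes)
        )

    def forward(self, x):
        fmap = self.features(x)       # intermediate feature maps (B, C, H, W)
        return self.head(fmap), fmap

def scatter_loss(fmap: torch.Tensor) -> torch.Tensor:
    """Hypothetical interpretability term: encourage each filter's activation
    to concentrate in one image region by penalizing high spatial entropy."""
    p = F.softmax(fmap.flatten(2), dim=2)          # spatial distribution per filter
    entropy = -(p * (p + 1e-8).log()).sum(dim=2)   # (B, C)
    return entropy.mean()

model = SmallCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.randn(8, 3, 64, 64)                 # dummy batch
labels = torch.randint(0, 10, (8,))

logits, fmap = model(images)
# Only image-level class labels supervise training; no part/region annotations.
loss = F.cross_entropy(logits, labels) + 0.1 * scatter_loss(fmap)
opt.zero_grad()
loss.backward()
opt.step()
```

The key design point matching the abstract is that the auxiliary term consumes only the feature maps themselves, so interpretability is learned end-to-end from classification labels alone.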

Citation (APA)

Shen, W., Wei, Z., Huang, S., Zhang, B., Fan, J., Zhao, P., & Zhang, Q. (2021). Interpretable Compositional Convolutional Neural Networks. In IJCAI International Joint Conference on Artificial Intelligence (pp. 2971–2978). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2021/409
