Invertible Concept-based Explanations for CNN Models with Non-negative Concept Activation Vectors

Abstract

Convolutional neural network (CNN) models for computer vision are powerful but lack explainability in their most basic form. This deficiency remains a key challenge when applying CNNs in important domains. Recent work on explanations through feature importance of approximate linear models has moved from input-level features (pixels or segments) to features from mid-layer feature maps in the form of concept activation vectors (CAVs). CAVs contain concept-level information and can be learned via clustering. In this work, we rethink the ACE algorithm of Ghorbani et al., proposing an alternative invertible concept-based explanation (ICE) framework to overcome its shortcomings. Based on the requirements of fidelity (the approximate model should match the target model) and interpretability (explanations should be meaningful to people), we design measurements and evaluate a range of matrix factorization methods within our framework. We find that non-negative concept activation vectors (NCAVs) obtained from non-negative matrix factorization provide superior interpretability and fidelity in both computational and human-subject experiments. Our framework provides both local and global concept-level explanations for pre-trained CNN models.
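
The pipeline the abstract describes (factorizing mid-layer CNN feature maps with non-negative matrix factorization to obtain NCAVs, then measuring how well the factorization reconstructs the original features) can be sketched briefly. The sketch below is an illustrative assumption rather than the authors' released code: the choice of ResNet-50, the layer cut point, the number of concepts, and the scikit-learn NMF settings are all placeholders.

```python
# Minimal sketch of the ICE idea: factorize mid-layer CNN feature maps with NMF
# to obtain non-negative concept activation vectors (NCAVs).
# Model, layer choice, and NMF settings are illustrative assumptions.
import torch
import torchvision.models as models
from sklearn.decomposition import NMF

# Pre-trained CNN; take activations up to the last convolutional block.
cnn = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
feature_extractor = torch.nn.Sequential(*list(cnn.children())[:-2])

def extract_feature_maps(images: torch.Tensor) -> torch.Tensor:
    """Return feature maps of shape (N, C, H, W) for a batch of images."""
    with torch.no_grad():
        return feature_extractor(images)

# images: a batch of preprocessed class-specific images (placeholder input here).
images = torch.randn(32, 3, 224, 224)
fmaps = extract_feature_maps(images)           # (N, C, H, W)
N, C, H, W = fmaps.shape

# Flatten spatial positions so each row is one channel vector at one location.
# Values are non-negative because the block ends in a ReLU, which NMF requires.
V = fmaps.permute(0, 2, 3, 1).reshape(-1, C).numpy()   # (N*H*W, C)

# Non-negative matrix factorization: V ~ S @ P, where rows of P are the NCAVs
# (concept directions in channel space) and S holds concept scores per location.
n_concepts = 10
reducer = NMF(n_components=n_concepts, init="nndsvda", max_iter=400)
S = reducer.fit_transform(V)   # (N*H*W, n_concepts) concept scores
P = reducer.components_        # (n_concepts, C) non-negative concept activation vectors

# "Invertible" in the sense that S @ P approximately reconstructs the original
# feature maps, so the fidelity of the explanation can be measured directly.
V_hat = S @ P
print("reconstruction error:", ((V - V_hat) ** 2).mean())
```

Non-negativity is what lets each NCAV be read as an additive part of the feature space, and the explicit reconstruction is what a fidelity measurement (approximate model versus target model) can quantify.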

Citation (APA)

Zhang, R., Madumal, P., Miller, T., Ehinger, K. A., & Rubinstein, B. I. P. (2021). Invertible Concept-based Explanations for CNN Models with Non-negative Concept Activation Vectors. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 13A, pp. 11682–11690). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i13.17389
