Explaining A Black-box By Using A Deep Variational Information Bottleneck Approach

51 citations · 75 Mendeley readers

Abstract

Interpretable machine learning has gained much attention recently. An explanation of a black-box decision system should be both brief and comprehensive: it must convey a large amount of information concisely. However, existing interpretable machine learning methods fail to consider brevity and comprehensiveness simultaneously, leading to redundant explanations. We propose the variational information bottleneck for interpretation (VIBI), a system-agnostic interpretable method that provides brief but comprehensive explanations. VIBI adopts the information bottleneck principle, an information-theoretic criterion, for finding such explanations. For each instance, VIBI selects key features that are maximally compressed with respect to the input (brevity) and maximally informative about the decision the black-box system makes on that input (comprehensiveness). We evaluate VIBI on three datasets and compare it with state-of-the-art interpretable machine learning methods in terms of both interpretability and fidelity, as judged by human evaluators and quantitative metrics.
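
To make the objective concrete, the sketch below illustrates one way the two terms in the abstract could be instantiated: an explainer scores input features, a differentiable binary mask selects a few of them, and an approximator is trained to reproduce the black-box decision from the masked input. This is a minimal sketch, not the authors' implementation: the PyTorch setup, the `Explainer`/`Approximator` names and layer sizes, the relaxed-Bernoulli (binary concrete) sampling, and the KL-to-sparse-prior compression term are all illustrative assumptions standing in for the paper's exact variational bound.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical components; names and architectures are assumptions,
# not taken from the paper's released code.

class Explainer(nn.Module):
    """Scores each input feature; high scores mark explanation candidates."""
    def __init__(self, d_in):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, d_in))

    def forward(self, x):
        return self.net(x)  # per-feature selection logits

class Approximator(nn.Module):
    """Predicts the black-box decision from the masked (brief) input."""
    def __init__(self, d_in, n_classes):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, n_classes))

    def forward(self, x_masked):
        return self.net(x_masked)

def vibi_style_loss(x, blackbox_logits, explainer, approximator, beta=0.1, tau=0.5):
    logits = explainer(x)
    # Differentiable binary mask via a relaxed-Bernoulli (binary concrete) sample
    u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
    z = torch.sigmoid((logits + torch.log(u) - torch.log1p(-u)) / tau)
    out = approximator(x * z)  # decision reconstructed from selected features only
    # Comprehensiveness: match the (frozen) black-box's decision on this input
    fidelity = F.cross_entropy(out, blackbox_logits.argmax(dim=1))
    # Brevity: KL between the selection distribution and a sparse Bernoulli prior
    p = torch.sigmoid(logits)
    prior = torch.full_like(p, 0.1)  # assumed sparsity level
    kl = (p * torch.log(p / prior + 1e-8)
          + (1 - p) * torch.log((1 - p) / (1 - prior) + 1e-8)).sum(dim=1).mean()
    return fidelity + beta * kl
```

In such a setup, the explainer and approximator would be trained jointly on this loss while the black-box stays frozen, with beta trading brevity against fidelity; at explanation time, the highest-scoring features form the instance-wise explanation.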

Citation (APA)

Bang, S., Xie, P., Lee, H., Wu, W., & Xing, E. (2021). Explaining A Black-box By Using A Deep Variational Information Bottleneck Approach. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 13A, pp. 11396–11404). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i13.17358
