Latent Space Explanation by Intervention

Citations: 5 · Mendeley readers: 17

Abstract

The success of deep neural networks relies heavily on their ability to encode complex relations between their inputs and outputs. While this property allows them to fit the training data well, it also obscures the mechanism that drives prediction. This study aims to reveal hidden concepts by employing an intervention mechanism, based on discrete variational autoencoders, that shifts the predicted class. An explanatory model then visualizes the encoded information from any hidden layer together with its intervened counterpart. By assessing the differences between the original and intervened representations, one can determine which concepts alter the class, thereby providing interpretability. We demonstrate the effectiveness of our approach on CelebA, where we present visualizations of bias in the data and suggest interventions that reveal and alter this bias.
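To make the idea concrete, below is a minimal, hypothetical sketch of the intervention step described in the abstract: an input is mapped to discrete latent "concept" codes, a classifier predicts a class from those codes, and a simple greedy search flips one code at a time until the prediction changes. Comparing the original and intervened codes then points to the class-altering concepts. The module names, sizes, straight-through estimator, and greedy search are illustrative assumptions, not the paper's exact architecture or procedure.

```python
# Hypothetical sketch (not the authors' implementation): intervene on discrete
# latent codes to flip a classifier's prediction, then inspect which codes changed.
import torch
import torch.nn as nn

torch.manual_seed(0)

class DiscreteEncoder(nn.Module):
    """Maps an input to K binary latent 'concept' codes (straight-through)."""
    def __init__(self, in_dim=64, n_codes=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, n_codes))

    def forward(self, x):
        logits = self.net(x)
        hard = (logits > 0).float()
        soft = torch.sigmoid(logits)
        # Straight-through estimator: hard codes forward, soft gradients backward.
        return hard + soft - soft.detach()

encoder = DiscreteEncoder()
classifier = nn.Linear(8, 2)          # predicts a class from the discrete codes

x = torch.randn(1, 64)                # stand-in for an input representation
z = encoder(x).detach()               # original discrete representation
orig_class = classifier(z).argmax(dim=-1)

# Greedy intervention: flip one code at a time and keep the first flip that
# changes the predicted class; that code marks a prediction-driving concept.
intervened = z.clone()
for k in range(z.shape[-1]):
    candidate = z.clone()
    candidate[0, k] = 1.0 - candidate[0, k]
    if classifier(candidate).argmax(dim=-1) != orig_class:
        intervened = candidate
        break

changed = (z != intervened).nonzero(as_tuple=False)
print("original codes:   ", z.int().tolist())
print("intervened codes: ", intervened.int().tolist())
print("class-altering concept indices:", changed[:, 1].tolist())
```

In the paper's setting, the comparison between the original and intervened representations is what gets visualized by the explanatory model; the sketch above only illustrates the flip-and-compare logic on random data.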

Citation (APA)

Gat, I., Lorberbom, G., Schwartz, I., & Hazan, T. (2022). Latent Space Explanation by Intervention. In Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022 (Vol. 36, pp. 679–687). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v36i1.19948
