The dangers of post-hoc interpretability: Unjustified counterfactual explanations


Abstract

Post-hoc interpretability approaches have proven to be powerful tools for generating explanations of the predictions made by a trained black-box model. However, they carry the risk of producing explanations that reflect artifacts learned by the model rather than actual knowledge from the data. This paper focuses on the case of counterfactual explanations and asks whether the generated instances can be justified, i.e., continuously connected to some ground-truth data. We evaluate the risk of generating unjustified counterfactual examples by investigating the local neighborhoods of the instances whose predictions are to be explained, and show that this risk is quite high for several datasets. Furthermore, we show that most state-of-the-art approaches do not differentiate justified from unjustified counterfactual examples, leading to less useful explanations.
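To make the "justification" idea concrete, the sketch below tests whether a candidate counterfactual can be linked, via an epsilon-chain of training points the model assigns to the same class, to an instance whose ground-truth label agrees with that class. This is a minimal illustration of the connectedness notion described in the abstract, not the authors' exact procedure: the `eps` threshold, the use of training points as chain links, and the `predict_fn` interface are all assumptions made here for the example.

import numpy as np
from collections import deque

def is_justified(x_cf, X_train, y_train, predict_fn, eps=0.5):
    """Heuristic epsilon-chain test of 'justification' (illustrative sketch,
    not the paper's algorithm). A counterfactual is deemed justified if a
    chain of training points, each within `eps` of the next and all assigned
    the counterfactual's class by the model, links it to an instance whose
    ground-truth label matches that class.
    """
    cf_class = predict_fn(x_cf.reshape(1, -1))[0]
    preds = predict_fn(X_train)
    # Chain links: training points the model places in the same class
    # as the counterfactual.
    mask = preds == cf_class
    links, link_true = X_train[mask], y_train[mask]
    # Node 0 is the counterfactual; node i >= 1 maps to links[i - 1].
    nodes = np.vstack([x_cf.reshape(1, -1), links])
    visited = {0}
    queue = deque([0])
    while queue:
        i = queue.popleft()
        # Reached a chain link whose ground-truth label agrees: justified.
        if i > 0 and link_true[i - 1] == cf_class:
            return True
        # Expand to all unvisited nodes within eps of the current node.
        dists = np.linalg.norm(nodes - nodes[i], axis=1)
        for j in np.where(dists <= eps)[0]:
            if int(j) not in visited:
                visited.add(int(j))
                queue.append(int(j))
    return False

Usage would look like `is_justified(x_cf, X_train, y_train, clf.predict, eps=0.5)` with, say, a fitted scikit-learn classifier `clf` (a hypothetical example); in practice `eps` must be scaled to the data, and too large a value trivially connects everything while too small a value disconnects genuinely justified counterfactuals.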

Cite

APA

Laugel, T., Lesot, M. J., Marsala, C., Renard, X., & Detyniecki, M. (2019). The dangers of post-hoc interpretability: Unjustified counterfactual explanations. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2019-August, pp. 2801–2807). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2019/388
