Unjustified Classification Regions and Counterfactual Explanations in Machine Learning

Abstract

Post-hoc interpretability approaches, although powerful tools for generating explanations of predictions made by a trained black-box model, have been shown to be vulnerable to issues caused by a lack of robustness of the classifier. In particular, this paper focuses on the notion of explanation justification, defined as connectedness to ground-truth data, in the context of counterfactuals. In this work, we explore the extent of the risk of generating unjustified explanations. We propose an empirical study to assess the vulnerability of classifiers and show that the chosen learning algorithm heavily impacts the vulnerability of the model. Additionally, we show that state-of-the-art post-hoc counterfactual approaches can minimize the impact of this risk by generating less local explanations. Source code is available at: https://github.com/thibaultlaugel/truce.
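To make the notion of justification concrete, the sketch below operationalizes "connectedness to ground-truth data" as an epsilon-chain test: a candidate counterfactual is treated as justified if it can be linked to a correctly classified training instance through a chain of points that all receive the same prediction, with consecutive points at most `eps` apart. This is a minimal illustration only; the function name `is_justified`, the choice of training points as chain nodes, and the parameter `eps` are assumptions made for this example, not the authors' exact procedure.

```python
import numpy as np
from collections import deque

def is_justified(x_cf, X_train, y_train, predict, eps):
    """Illustrative check (hypothetical helper): is the counterfactual
    `x_cf` connected, through an epsilon-chain of points all predicted
    as its class, to a training instance whose ground-truth label
    matches that predicted class?"""
    target = predict(x_cf.reshape(1, -1))[0]
    # Restrict chain nodes to training points the model assigns to the
    # counterfactual's class, so the whole chain stays in one
    # classification region.
    mask = predict(X_train) == target
    cand, cand_y = X_train[mask], y_train[mask]
    nodes = np.vstack([x_cf, cand])  # node 0 is the counterfactual
    # Breadth-first search over the epsilon-neighbourhood graph.
    visited = {0}
    queue = deque([0])
    while queue:
        i = queue.popleft()
        # Reaching a correctly classified training point justifies x_cf.
        if i > 0 and cand_y[i - 1] == target:
            return True
        dists = np.linalg.norm(nodes - nodes[i], axis=1)
        for j in np.where(dists <= eps)[0]:
            if j not in visited:
                visited.add(int(j))
                queue.append(int(j))
    return False
```

With a scikit-learn-style classifier this could be called as `is_justified(x_cf, X_train, y_train, clf.predict, eps=0.5)`; a smaller `eps` makes the connectedness requirement stricter, while a larger one makes more counterfactuals count as justified.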

Citation (APA)

Laugel, T., Lesot, M. J., Marsala, C., Renard, X., & Detyniecki, M. (2020). Unjustified Classification Regions and Counterfactual Explanations in Machine Learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11907 LNAI, pp. 37–54). Springer. https://doi.org/10.1007/978-3-030-46147-8_3
