Counterfactual Explanations in Explainable AI: A Tutorial


Abstract

Deep learning has shown powerful performance in many fields; however, its black-box nature hinders further applications. In response, explainable artificial intelligence (XAI) has emerged, aiming to explain the predictions and behaviors of deep learning models. Among the many explanation methods, counterfactual explanation has been identified as one of the best because it resembles the human cognitive process: it delivers an explanation by constructing a contrastive situation, so that a human may interpret the underlying mechanism by cognitively examining the difference. In this tutorial, we introduce the cognitive concept and characteristics of counterfactual explanation, its computational form, mainstream methods, and various adaptations to different explanation settings. In addition, we demonstrate several typical use cases of counterfactual explanations in popular research areas. Finally, in light of practice, we outline potential applications of counterfactual explanations, such as data augmentation and conversational systems. We hope this tutorial helps participants gain an overall sense of counterfactual explanations.
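To make the "computational form" mentioned above concrete, here is a minimal sketch of one common formulation (in the style of Wachter et al.): given an input x and a differentiable classifier f, search for a nearby point x' whose prediction flips, by minimizing a loss that trades off the prediction gap against the distance to x. The toy logistic-regression weights, the hyperparameters, and the function names are all hypothetical choices for illustration, not the tutorial's own method.

```python
import numpy as np

# Toy differentiable classifier: logistic regression with fixed,
# hypothetical weights. sigmoid(w @ x + b) > 0.5 means class 1.
w = np.array([2.0, -1.0])
b = -0.5

def predict_proba(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def counterfactual(x, target=1.0, lam=0.1, lr=0.1, steps=500):
    """Gradient search for x' close to x with f(x') near `target`.

    Objective (Wachter-style): (f(x') - target)^2 + lam * ||x' - x||^2,
    where lam controls how strongly x' is pulled back toward x.
    """
    xp = x.copy()
    for _ in range(steps):
        p = predict_proba(xp)
        # Chain rule: d/dxp sigmoid(w @ xp + b) = p * (1 - p) * w
        grad = 2 * (p - target) * p * (1 - p) * w + 2 * lam * (xp - x)
        xp = xp - lr * grad
    return xp

x = np.array([-1.0, 1.0])   # originally classified as class 0
x_cf = counterfactual(x)    # nearby point pushed across the boundary
```

The explanation delivered to the user is then the difference `x_cf - x`: "had these features changed by this much, the prediction would have been class 1." Real methods add constraints such as sparsity, plausibility, or actionability on top of this basic objective.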

Citation (APA)

Wang, C., Li, X. H., Han, H., Wang, S., Wang, L., Cao, C. C., & Chen, L. (2021). Counterfactual Explanations in Explainable AI: A Tutorial. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 4080–4081). Association for Computing Machinery. https://doi.org/10.1145/3447548.3470797
