Rationale Discovery and Explainable AI

Abstract

The justification of an algorithm's outcomes is important in many domains, and in particular in the law. However, previous research has shown that machine learning systems can make the right decisions for the wrong reasons: despite high accuracies, not all of the conditions that define the domain of the training data are learned. In this study, we investigate what the system does learn, using state-of-the-art explainable AI techniques. With the use of SHAP and LIME, we are able to show which features impact the decision-making process and how that impact changes with different distributions of the training data. However, our results also show that even high accuracy and good relevant feature detection are no guarantee of a sound rationale. Hence, these state-of-the-art explainable AI techniques cannot be used to fully expose unsound rationales, further advocating the need for a separate method for rationale evaluation.
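
To illustrate the kind of analysis the abstract describes, the sketch below shows how SHAP and LIME can be applied to a trained classifier to inspect which features drive its predictions. It is a minimal illustration under stated assumptions: the synthetic data, hypothetical feature names, and gradient-boosted model are chosen for the example and are not the dataset or models used in the paper.

# Minimal sketch (not the authors' code): probing a trained classifier with
# SHAP and LIME to see which features influence its decisions. The synthetic
# data, feature names, and model choice below are illustrative assumptions.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
feature_names = ["age", "income", "residence_years", "capital"]  # hypothetical
X = rng.normal(size=(1000, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # toy decision rule

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# SHAP: average absolute attribution per feature gives a global view of
# which features impact the decisions (single-output model, so shap_values
# is an (n_samples, n_features) array).
shap_values = shap.TreeExplainer(model).shap_values(X)
for name, impact in zip(feature_names, np.abs(shap_values).mean(axis=0)):
    print(f"{name}: mean |SHAP value| = {impact:.3f}")

# LIME: local surrogate explanation for one individual decision.
lime_explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["deny", "grant"],
    mode="classification",
)
explanation = lime_explainer.explain_instance(
    X[0], model.predict_proba, num_features=len(feature_names)
)
print(explanation.as_list())

In the setting studied in the paper, attributions of this kind are used to check whether the learned model attends to the relevant conditions, and how that attention shifts under different distributions of the training data.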

Citation (APA)
Steging, C., Renooij, S., & Verheij, B. (2021). Rationale Discovery and Explainable AI. In Frontiers in Artificial Intelligence and Applications (Vol. 346, pp. 225–234). IOS Press BV. https://doi.org/10.3233/FAIA210341
