The justification of an algorithm's outcomes is important in many domains, and in particular in the law. However, previous research has shown that machine learning systems can make the right decisions for the wrong reasons: despite high accuracy, they do not learn all of the conditions that define the domain of the training data. In this study, we investigate what such a system does learn, using state-of-the-art explainable AI techniques. With SHAP and LIME, we show which features impact the decision-making process and how that impact changes under different distributions of the training data. However, our results also show that even high accuracy and good relevant-feature detection are no guarantee of a sound rationale. Hence these state-of-the-art explainable AI techniques cannot be used to fully expose unsound rationales, which further argues for a separate method of rationale evaluation.
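The paper itself does not include code, but the following is a minimal sketch of how feature attributions of the kind described above are typically obtained with SHAP and LIME for a tabular classifier. The dataset, model, class names ("deny"/"grant"), and feature names are illustrative assumptions, not the experimental setup of the study.

```python
# Illustrative sketch only: the dataset, model, class names, and feature
# names below are stand-ins, not the benchmark used in the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
import shap
from lime.lime_tabular import LimeTabularExplainer

# Synthetic tabular data standing in for a (legal-domain) training set.
X, y = make_classification(n_samples=1000, n_features=6, n_informative=4,
                           random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# SHAP: global view of how much each feature drives the model's output.
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X)
if isinstance(sv, list):      # older shap versions: list of per-class arrays
    sv = sv[1]
elif sv.ndim == 3:            # newer shap versions: (samples, features, classes)
    sv = sv[..., 1]
mean_abs_impact = np.abs(sv).mean(axis=0)
for name, impact in sorted(zip(feature_names, mean_abs_impact),
                           key=lambda pair: -pair[1]):
    print(f"{name}: {impact:.3f}")

# LIME: local explanation of a single decision.
lime_explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                      class_names=["deny", "grant"],
                                      mode="classification")
explanation = lime_explainer.explain_instance(X[0], model.predict_proba,
                                              num_features=4)
print(explanation.as_list())
```

Repeating the SHAP step on models trained under different data distributions, as the study does, would then allow a comparison of how the attributed feature impacts shift, without by itself revealing whether the underlying rationale is sound.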
Steging, C., Renooij, S., & Verheij, B. (2021). Rationale Discovery and Explainable AI. In Frontiers in Artificial Intelligence and Applications (Vol. 346, pp. 225–234). IOS Press BV. https://doi.org/10.3233/FAIA210341