A General-Purpose Method for Applying Explainable AI for Anomaly Detection

Abstract

The need for explainable AI (XAI) is well established, but relatively little has been published outside of the supervised learning paradigm. This paper focuses on a principled approach to applying explainability and interpretability to the task of unsupervised anomaly detection. We argue that explainability is principally an algorithmic task and interpretability is principally a cognitive task, and we draw on insights from the cognitive sciences to propose a general-purpose method for practical diagnosis using explained anomalies. We define Attribution Error and demonstrate, using real-world labeled datasets, that our method based on Integrated Gradients (IG) yields significantly lower attribution errors than alternative methods.
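To make the abstract's claim concrete, the sketch below illustrates how an Integrated Gradients-style attribution can assign an anomaly score to individual features. It is an illustration only, not the paper's implementation: the toy anomaly scorer, the choice of baseline point, the step count, and the finite-difference gradient are all assumptions made for this example.

import numpy as np

def integrated_gradients(score_fn, x, baseline, steps=64):
    # Approximate Integrated Gradients attributions for an anomaly score.
    # score_fn maps a single feature vector to a scalar anomaly score;
    # baseline is a reference "normal" point to integrate from.
    eps = 1e-4
    grads = np.zeros_like(x, dtype=float)
    for alpha in np.linspace(0.0, 1.0, steps):
        point = baseline + alpha * (x - baseline)
        # Central-difference gradient of the score at this interpolant.
        for i in range(x.size):
            bump = np.zeros_like(x, dtype=float)
            bump[i] = eps
            grads[i] += (score_fn(point + bump) - score_fn(point - bump)) / (2 * eps)
    grads /= steps
    # Attribution: average gradient scaled by the input-baseline gap.
    return (x - baseline) * grads

# Toy scorer (an assumption for the example): squared distance from the origin.
score = lambda p: float(np.sum(p ** 2))
attr = integrated_gradients(score, x=np.array([3.0, 0.1]), baseline=np.zeros(2))
# attr is approximately [9.0, 0.01]: nearly all of the anomaly score
# is attributed to the first feature, which is what a diagnostician
# would inspect first.

In practice the baseline would be a learned or domain-chosen "normal" point and the gradients would come from the model itself rather than finite differences; the paper's definition of Attribution Error measures how far such attributions deviate from the known cause in labeled data.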

Citation (APA)

Sipple, J., & Youssef, A. (2022). A General-Purpose Method for Applying Explainable AI for Anomaly Detection. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13515 LNAI, pp. 162–174). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-16564-1_16
