Evaluation Metrics in Explainable Artificial Intelligence (XAI)

Abstract

Although AI has spread across all domains and many authors have argued that providing explanations is crucial, a further question arises: how accurate are those explanations? This paper summarizes the state of the art in XAI evaluation metrics, presents a categorization of evaluation methods, and maps existing tools to theoretically defined metrics, highlighting open challenges and directions for future development. The contribution of this paper is to help researchers identify and apply evaluation metrics when developing an XAI system, and to identify opportunities for proposing new evaluation metrics for XAI.
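
To give a concrete sense of what such metrics measure, below is a minimal, illustrative Python sketch of one common family of XAI evaluation metrics: fidelity (faithfulness), estimated by perturbing the features an explanation marks as most important and observing how much the model's prediction changes. The function name fidelity_drop, the toy linear model, and the choice of a zero baseline are assumptions made for this example only; they are not taken from the paper.

    # Illustrative sketch of a fidelity-style XAI evaluation metric.
    # Assumption: masking highly attributed features should change the
    # prediction more than masking uninformative ones.
    import numpy as np

    def fidelity_drop(predict, x, attributions, k=3, baseline=0.0):
        """Absolute prediction change after masking the k features with
        the highest attribution magnitude (hypothetical helper)."""
        top_k = np.argsort(np.abs(attributions))[::-1][:k]  # most important features
        x_masked = x.copy()
        x_masked[top_k] = baseline                           # replace with a neutral value
        return abs(predict(x) - predict(x_masked))           # larger change => more faithful explanation

    # Toy example: a linear model whose true feature importances are its weights.
    rng = np.random.default_rng(0)
    w = rng.normal(size=10)
    predict = lambda x: float(w @ x)

    x = rng.normal(size=10)
    good_expl = w * x               # attribution proportional to each feature's contribution
    bad_expl = rng.normal(size=10)  # random attribution, used as a baseline explanation

    print("change with informative explanation:", fidelity_drop(predict, x, good_expl))
    print("change with random explanation:     ", fidelity_drop(predict, x, bad_expl))

In this toy setting the informative explanation typically yields a larger prediction change than the random one, which is the intuition behind fidelity-based evaluation; other metric families surveyed in the paper (e.g., human-grounded or functionally-grounded evaluations) require different protocols.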

Citation (APA)

Coroama, L., & Groza, A. (2022). Evaluation Metrics in Explainable Artificial Intelligence (XAI). In Communications in Computer and Information Science (Vol. 1675 CCIS, pp. 401–413). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-20319-0_30
