Although AI has spread across all domains and many authors have argued that providing explanations is crucial, another question comes into play: how accurate are those explanations? This paper presents a state-of-the-art review of XAI evaluation metrics, proposes a categorization of evaluation methods, and maps existing tools to theoretically defined metrics, highlighting challenges and directions for future development. The contribution of this paper is to help researchers identify and apply evaluation metrics when developing an XAI system, and to identify opportunities for proposing new XAI evaluation metrics.
CITATION STYLE
Coroama, L., & Groza, A. (2022). Evaluation Metrics in Explainable Artificial Intelligence (XAI). In Communications in Computer and Information Science (Vol. 1675 CCIS, pp. 401–413). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-20319-0_30