Since the fourth industrial revolution began, Artificial Intelligence (AI) systems have become cornerstones of the activities of many organizations. Nonetheless, the application of ML and AI techniques is often limited by users' mistrust of the results produced by the algorithms, which can lead to poor decisions. Both facts demonstrate that current algorithms need to be interpretable and transparent. EXplainable Artificial Intelligence (XAI) techniques are paramount to this objective, as they translate black-box algorithms into transparent logic for developers and users. However, although different XAI approaches evaluate individual dimensions through metrics, there is no consensus on how to evaluate XAI. In an effort to standardize the evaluation of XAI and allow users to identify the ideal XAI solution for their problem, we present an approach focused on evaluating XAI quality that can be applied in Model Driven Development (MDD) scenarios. The main advantages of our proposal are that it (i) provides a set of quality metrics to evaluate the multiple relevant XAI and AI dimensions, (ii) offers a holistic evaluation based on these quality metrics, (iii) emphasizes the relevance of identifying the different target users involved, and (iv) enables comparison across different XAI alternatives. To show the applicability of our proposal, we apply it to a widely known case study: chess.
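As a rough illustration of the kind of comparison the abstract describes, the sketch below aggregates per-dimension quality scores into a single holistic score for each XAI alternative. The dimension names, weights, scores, and alternative names are all hypothetical placeholders, not the paper's actual metrics or results.

```python
# Hypothetical sketch: aggregating quality metrics across XAI dimensions
# to compare alternatives holistically. All names and numbers below are
# illustrative assumptions, not values from the paper.

def holistic_score(metric_scores, weights):
    """Weighted average of per-dimension metric scores in [0, 1]."""
    total_weight = sum(weights[d] for d in metric_scores)
    return sum(metric_scores[d] * weights[d] for d in metric_scores) / total_weight

# Example: two XAI alternatives evaluated on the same dimensions.
weights = {"fidelity": 0.4, "comprehensibility": 0.35, "stability": 0.25}
alternative_a = {"fidelity": 0.9, "comprehensibility": 0.6, "stability": 0.8}
alternative_b = {"fidelity": 0.7, "comprehensibility": 0.9, "stability": 0.85}

scores = {name: holistic_score(m, weights)
          for name, m in [("alternative_a", alternative_a),
                          ("alternative_b", alternative_b)]}
best = max(scores, key=scores.get)  # alternative with the highest overall score
```

In practice the weights would depend on the target users involved — e.g. end users may weight comprehensibility higher than developers do — which is why the approach stresses identifying the different user groups.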
CITATION STYLE
Navarro, Á., Sanchis, J., Maté, A., & Trujillo, J. (2023). An Approach Aligned with Model Driven Development to Evaluate the Quality of Explainable Artificial Intelligence. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 14319 LNCS, pp. 284–293). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-47112-4_27