Explainable AI Using the Wasserstein Distance

2 citations · 15 Mendeley readers

This article is free to access.

Abstract

AI-based decision systems are often opaque: their black-box nature leaves decisions unexplained, which is problematic in life-changing applications such as disease diagnosis, financial investment, and military decision-making. Explainable AI (XAI), which deals with the explanation, justification, and accountability of AI applications, has therefore become a pressing need. However, there is a dearth of XAI protocols that combine technical rigor with usability. In this paper, novel, usable XAI definitions are introduced, with the Wasserstein distance serving as their backbone. The essence of our work is to integrate the mathematical formulation with the performance of the model. We provide definitions in three contexts: i) the explainability of a model, ii) the explainability of the features, and iii) the explainability of the decisions rendered by a model. The proposed constructions are validated through experiments on several different models. Empirical results on synthetic and real-world datasets confirm a positive association between the proposed explainability measures and model performance.
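The abstract names the Wasserstein distance as the backbone of the proposed definitions. As a minimal illustration of that underlying metric (not the authors' construction), the 1-Wasserstein distance between two equal-sized 1-D empirical samples reduces to the mean absolute difference of their sorted order statistics:

```python
# Illustrative sketch, not the paper's implementation: for two 1-D samples
# of equal size with uniform weights, the optimal transport plan matches
# sorted values pairwise, so W1 is the mean gap between order statistics.
def wasserstein_1d(xs, ys):
    """1-Wasserstein distance between two equal-sized 1-D samples."""
    if len(xs) != len(ys):
        raise ValueError("this simple sketch assumes equal sample sizes")
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

# Shifting a sample by a constant c moves it a Wasserstein distance of c.
print(wasserstein_1d([0.0, 1.0, 2.0], [1.0, 2.0, 3.0]))  # 1.0
```

For unequal sample sizes or weighted samples, a general-purpose routine such as `scipy.stats.wasserstein_distance` handles the bookkeeping.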

Citation (APA)
Chaudhury, S. S., Sadhukhan, P., & Sengupta, K. (2024). Explainable AI Using the Wasserstein Distance. IEEE Access, 12, 18087–18102. https://doi.org/10.1109/ACCESS.2024.3360484
