Auditing Fairness and Explainability in Chest X-Ray Image Classifier


Abstract

Advancements in Artificial Intelligence have produced several tools that can be used in medical decision support systems. However, these models often exhibit the so-called 'black-box problem': an algorithmic diagnosis is produced, but no human-understandable details about the decision process can be obtained. This raises critical questions about fairness and explainability, both crucial for equitable healthcare. In this paper, we focus on chest X-ray image classification, auditing the reproducibility of previously reported model biases, exploring the applicability of Explainable AI (XAI) techniques, and auditing the fairness of the produced explanations. We highlight the challenges of assessing the quality of explanations provided by XAI methods, particularly in the absence of ground truth. This, in turn, strongly hampers the comparison of explanation quality across patient subgroups, which is a cornerstone of fairness audits. Our experiments illustrate the complexities of achieving transparent AI interpretations in medical diagnostics, underscoring the need both for reliable XAI techniques and for more robust fairness auditing methods.

Citation (APA)

Bordes, G. B., & Perotti, A. (2024). Auditing Fairness and Explainability in Chest X-Ray Image Classifier. In International Conference on Agents and Artificial Intelligence (Vol. 3, pp. 1308–1315). Science and Technology Publications, Lda. https://doi.org/10.5220/0012472400003636
