Is explainable AI responsible AI?



Abstract

When artificial intelligence (AI) is used to make high-stakes decisions, some worry that this will create a morally troubling responsibility gap, that is, a situation in which nobody is morally responsible for the actions and outcomes that result. Since the responsibility gap might be thought to result from individuals lacking knowledge of the future behavior of AI systems, it has been suggested that deploying explainable artificial intelligence (XAI) techniques will help us to avoid it. These techniques provide humans with certain forms of understanding of the systems in question. In this paper, I consider whether existing XAI techniques can indeed close the responsibility gap. I identify a number of significant limits to their ability to do so. Ensuring that responsibility for AI-assisted outcomes is maintained may require using different techniques in different circumstances, and potentially also developing new techniques that can avoid each of the issues identified.

Citation (APA)

Taylor, I. (2025). Is explainable AI responsible AI? AI and Society, 40(3), 1695–1704. https://doi.org/10.1007/s00146-024-01939-7
