Reviewing the need for explainable artificial intelligence (XAI)

Abstract

The diffusion of artificial intelligence (AI) applications in organizations and society has fueled research on explaining AI decisions. The explainable AI (xAI) field is rapidly expanding, with numerous ways of extracting information from and visualizing the output of AI technologies (e.g., deep neural networks). Yet, we have a limited understanding of how xAI research addresses the need for explainable AI. We conduct a systematic review of the xAI literature on the topic and identify four thematic debates central to how xAI addresses the black-box problem. Based on this critical analysis of the xAI scholarship, we synthesize the findings into a future research agenda to further the xAI body of knowledge.

Citation (APA)

Gerlings, J., Shollo, A., & Constantiou, I. (2021). Reviewing the need for explainable artificial intelligence (XAI). In Proceedings of the Annual Hawaii International Conference on System Sciences (Vol. 2020-January, pp. 1284–1293). IEEE Computer Society. https://doi.org/10.24251/hicss.2021.156
