A Meta-Analysis of the Utility of Explainable Artificial Intelligence in Human-AI Decision-Making

Abstract

Research on artificial intelligence (AI)-assisted decision-making is growing rapidly, with a steadily rising number of studies evaluating how AI, with and without techniques from the field of explainable AI (XAI), affects human decision-making performance. However, because tasks and experimental setups vary with different objectives, some studies report improved user decision-making performance through XAI, while others report only negligible effects. In this article, we therefore present an initial synthesis of existing XAI studies, using a statistical meta-analysis to derive implications across the existing research. We observe a statistically significant positive impact of XAI on users' performance. Additionally, first results indicate that human-AI decision-making tends to yield better task performance on text data. However, we find no effect of explanations on users' performance compared to sole AI predictions. Our initial synthesis motivates future research into the underlying causes and contributes to further developing algorithms that effectively benefit human decision-makers by providing meaningful explanations.
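The abstract does not specify the pooling model used in the meta-analysis. As an illustration of how study-level effects on decision-making performance might be aggregated, the sketch below implements a standard random-effects (DerSimonian-Laird) pooling of effect sizes; the effect sizes and variances shown are hypothetical and not taken from the paper.

```python
import numpy as np

def random_effects_pool(effects, variances):
    """Pool study-level effect sizes with a DerSimonian-Laird random-effects model.

    effects   : per-study effect sizes (e.g., Hedges' g)
    variances : corresponding sampling variances
    Returns (pooled effect, standard error, tau^2 between-study variance).
    """
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)

    # Fixed-effect (inverse-variance) weights and pooled estimate
    w = 1.0 / v
    y_fe = np.sum(w * y) / np.sum(w)

    # Cochran's Q and the DerSimonian-Laird estimate of tau^2
    q = np.sum(w * (y - y_fe) ** 2)
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)

    # Random-effects weights, pooled estimate, and its standard error
    w_re = 1.0 / (v + tau2)
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se, tau2

# Hypothetical effect sizes for XAI-vs-AI-only comparisons (illustrative only)
g = [0.31, 0.05, -0.10, 0.22, 0.12]
var_g = [0.02, 0.04, 0.03, 0.05, 0.02]
effect, se, tau2 = random_effects_pool(g, var_g)
print(f"pooled g = {effect:.3f} +/- {1.96 * se:.3f} (95% CI), tau^2 = {tau2:.3f}")
```

A random-effects model is the usual choice when the included studies differ in tasks and experimental setups, since it allows the true effect to vary across studies rather than assuming a single common effect.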

Cite

APA

Schemmer, M., Hemmer, P., Nitsche, M., Kühl, N., & Vössing, M. (2022). A Meta-Analysis of the Utility of Explainable Artificial Intelligence in Human-AI Decision-Making. In AIES 2022 - Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (pp. 617–626). Association for Computing Machinery. https://doi.org/10.1145/3514094.3534128
