Unraveling the Dilemma of AI Errors: Exploring the Effectiveness of Human and Machine Explanations for Large Language Models

Abstract

The field of eXplainable artificial intelligence (XAI) has produced a plethora of methods (e.g., saliency maps) to gain insight into artificial intelligence (AI) models, and has exploded with the rise of deep learning (DL). However, human-participant studies question the efficacy of these methods, particularly when the AI output is wrong. In this study, we collected and analyzed 156 human-generated text and saliency-based explanations from a question-answering task (N = 40) and compared them empirically to state-of-the-art XAI explanations (integrated gradients, conservative LRP, and ChatGPT) in a human-participant study (N = 136). Our findings show that participants found human saliency maps to be more helpful in explaining AI answers than machine saliency maps, but performance negatively correlated with trust in the AI model and explanations. This finding hints at the dilemma of AI errors in explanation, where helpful explanations can lead to lower task performance when they support wrong AI predictions.
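For readers unfamiliar with the attribution methods named above, the sketch below shows one common way to produce a token-level saliency map for an extractive question-answering model with integrated gradients, using the Captum and Hugging Face Transformers libraries. This is an illustrative sketch under stated assumptions, not the authors' pipeline; the model name, question, and context strings are placeholders.

```python
# Minimal sketch: token-level integrated-gradients saliency for extractive QA.
# Assumptions: a SQuAD-finetuned DistilBERT checkpoint and placeholder inputs.
import torch
from captum.attr import LayerIntegratedGradients
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_name = "distilbert-base-uncased-distilled-squad"  # assumed example model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
model.eval()

question = "Who wrote the novel?"                               # placeholder
context = "The novel was written by Jane Austen in 1813."       # placeholder
enc = tokenizer(question, context, return_tensors="pt")

def forward_start_logits(input_ids, attention_mask):
    # Score to attribute: the model's start logits over the input tokens.
    return model(input_ids=input_ids, attention_mask=attention_mask).start_logits

# Predicted start position of the answer span (attribution target).
with torch.no_grad():
    start_idx = forward_start_logits(enc["input_ids"], enc["attention_mask"]).argmax(dim=-1)

# Attribute the predicted start logit back to the input embedding layer.
lig = LayerIntegratedGradients(forward_start_logits, model.get_input_embeddings())
baseline_ids = torch.full_like(enc["input_ids"], tokenizer.pad_token_id)
attributions = lig.attribute(
    inputs=enc["input_ids"],
    baselines=baseline_ids,
    additional_forward_args=(enc["attention_mask"],),
    target=int(start_idx),
)

# Collapse the embedding dimension to one saliency score per token.
token_saliency = attributions.sum(dim=-1).squeeze(0)
tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
for tok, score in zip(tokens, token_saliency.tolist()):
    print(f"{tok:>12s}  {score:+.4f}")
```

The per-token scores printed here are the kind of machine-generated saliency map that the study compares against human-generated saliency annotations; conservative LRP would assign token relevances by a different propagation rule, and ChatGPT-style explanations are free-text rather than token scores.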

Cite

CITATION STYLE

APA

Pafla, M., Larson, K., & Hancock, M. (2024). Unraveling the Dilemma of AI Errors: Exploring the Effectiveness of Human and Machine Explanations for Large Language Models. In Proceedings of the CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery. https://doi.org/10.1145/3613904.3642934
