Uncertainty in XAI: Human Perception and Modeling Approaches

Abstract

Artificial Intelligence (AI) plays an increasingly integral role in decision-making processes. To foster trust in AI predictions, many approaches to explainable AI (XAI) have been developed and evaluated. Surprisingly, one factor essential for trust has so far been underrepresented in XAI research: uncertainty, both in how it is modeled in Machine Learning (ML) and XAI and in how it is perceived by humans relying on AI assistance. This review paper provides an in-depth analysis of both aspects. We review established and recent methods to account for uncertainty in ML models and XAI approaches, and we discuss empirical evidence on how model uncertainty is perceived by human users of XAI systems. We summarize the methodological advances and limitations of these methods and of research on human perception. Finally, we discuss the implications of the current state of the art in model development and in research on human perception. We believe that highlighting the role of uncertainty in XAI will be helpful to both practitioners and researchers and could ultimately support more responsible use of AI in practical applications.
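
To give a concrete flavor of the kind of uncertainty modeling the review surveys, the sketch below estimates epistemic uncertainty for a regression task with a small bootstrap ensemble, in the spirit of deep ensembles. It is purely illustrative and not taken from the paper: the toy data, model choice (scikit-learn's MLPRegressor), and ensemble size are assumptions made here for demonstration only.

# Illustrative sketch (not from the paper): ensemble-based epistemic
# uncertainty for a toy 1-D regression problem.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Training data with a gap around x = 0, so the model should be
# more uncertain there than inside the observed regions.
X_train = np.concatenate([rng.uniform(-3, -1, 100), rng.uniform(1, 3, 100)])[:, None]
y_train = np.sin(X_train).ravel() + 0.1 * rng.normal(size=len(X_train))

# Train an ensemble of small networks on bootstrap resamples of the data.
ensemble = []
for seed in range(10):
    idx = rng.integers(0, len(X_train), len(X_train))  # bootstrap sample
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=seed)
    model.fit(X_train[idx], y_train[idx])
    ensemble.append(model)

# Predictive mean and spread: disagreement across ensemble members serves
# as a simple epistemic uncertainty signal.
X_test = np.linspace(-4, 4, 200)[:, None]
preds = np.stack([m.predict(X_test) for m in ensemble])  # shape: (members, points)
mean, std = preds.mean(axis=0), preds.std(axis=0)
print(f"max std inside data range: {std[(X_test.ravel() > -3) & (X_test.ravel() < -1)].max():.3f}")
print(f"max std in the data gap:   {std[np.abs(X_test.ravel()) < 1].max():.3f}")

The design choice here is that ensemble disagreement is largest where training data is sparse, which is exactly the behavior a user-facing XAI system would need to communicate; other methods covered by such reviews (e.g., Monte Carlo dropout or Bayesian neural networks) expose the same predictive mean and spread through different mechanisms.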

Citation (APA)

Chiaburu, T., Haußer, F., & Bießmann, F. (2024). Uncertainty in XAI: Human Perception and Modeling Approaches. Machine Learning and Knowledge Extraction, 6(2), 1170–1192. https://doi.org/10.3390/make6020055
