(Over)Trusting AI Recommendations: How System and Person Variables Affect Dimensions of Complacency


Abstract

Over-trusting AI systems can lead to complacency and decision errors. However, human and system variables may affect complacency, and understanding their interplay is important for HCI. In our experiment, 90 participants solved traffic route problems guided by AI recommendations and were assigned to either a transparent system providing reasons for its recommendations or a non-transparent system. We found that transparent systems lowered the potential to alleviate workload (albeit not the neglect of monitoring) but simultaneously fostered actual complacent behavior. In contrast, performance expectancy fostered the potential to alleviate workload, but not complacent behavior. Interaction analyses showed that the effects of performance expectancy depend on system transparency. This contributes to our understanding of how system- and person-related variables interact in affecting complacency, stresses the differences between dimensions of complacency, and underlines the need to carefully consider transparency and performance expectancy in AI research and design.
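To illustrate the kind of interaction analysis the abstract refers to, below is a minimal, hypothetical Python sketch (not the authors' analysis or data): it fits a moderated regression testing whether the effect of performance expectancy on one complacency measure depends on system transparency. All variable names, scales, and the synthetic data are illustrative assumptions.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 90  # sample size matching the study's 90 participants

# Between-subjects factor: transparent (1) vs. non-transparent (0) system.
transparency = rng.integers(0, 2, size=n)

# Continuous person variable: self-reported performance expectancy
# (assumed 1-5 rating scale, purely for illustration).
performance_expectancy = rng.uniform(1, 5, size=n)

# Synthetic outcome: one complacency dimension (e.g., potential to
# alleviate workload), generated with a built-in interaction effect
# so the example has something to detect.
complacency = (
    2.0
    + 0.4 * performance_expectancy
    - 0.3 * transparency * performance_expectancy
    + rng.normal(0, 0.5, size=n)
)

df = pd.DataFrame({
    "transparency": transparency,
    "performance_expectancy": performance_expectancy,
    "complacency": complacency,
})

# OLS with main effects plus an interaction term; a significant
# coefficient on transparency:performance_expectancy indicates that
# system transparency moderates the effect of performance expectancy.
model = smf.ols(
    "complacency ~ transparency * performance_expectancy", data=df
).fit()
print(model.summary())

In such a design, a significant interaction coefficient would be followed up with simple-slopes analyses (the effect of performance expectancy within each transparency condition), which is one common way to probe the pattern the abstract reports.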

Citation (APA)

Harbarth, L., Gößwein, E., Bodemer, D., & Schnaubert, L. (2024). (Over)trusting AI recommendations: How system and person variables affect dimensions of complacency. International Journal of Human-Computer Interaction. https://doi.org/10.1080/10447318.2023.2301250
