The effect of agent reasoning transparency on automation bias: An analysis of response performance


Abstract

We examined how the transparency of an agent’s reasoning affected a human operator’s complacent behavior in a military route-selection task. Participants guided a three-vehicle convoy through a simulated environment with limited information about their surroundings, while maintaining communication with command and monitoring for threats. An intelligent route-planning agent, RoboLeader, assessed potential threats and proposed changes to the planned route as needed. RoboLeader was 66% reliable, so participants had to correctly reject its suggestions when they were incorrect. Access to RoboLeader’s reasoning was varied between subjects across three conditions (no reasoning, reasoning present, and increased reasoning transparency). Access to the agent’s reasoning improved performance and reduced automation bias; however, when reasoning transparency increased further, performance decreased and automation bias increased. Implications of these findings for the presentation of reasoning information in operational settings are discussed.

Citation (APA)

Wright, J. L., Chen, J. Y. C., Barnes, M. J., & Hancock, P. A. (2016). The effect of agent reasoning transparency on automation bias: An analysis of response performance. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9740, pp. 465–477). Springer Verlag. https://doi.org/10.1007/978-3-319-39907-2_45
