Adaptive trust calibration for supervised autonomous vehicles

19 Citations · 34 Readers

Abstract

Poor trust calibration in autonomous vehicles often degrades overall system performance in terms of safety or efficiency. Existing studies have primarily examined the importance of system transparency in maintaining proper trust calibration, with little emphasis on how to detect over-trust and under-trust, or on how to recover from them. To address these research gaps, we first provide a framework for detecting a user's trust calibration status on the basis of their reliance behavior. We then propose trust calibration cues (TCCs), cognitive cues that prompt the user to quickly restore appropriate trust calibration. Combining this framework with TCCs, we explore a novel method of adaptive trust calibration. We will evaluate the framework and examine the effectiveness of TCCs with a newly developed online drone simulator.
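The abstract describes detecting over-trust and under-trust from the user's reliance behavior. The paper does not specify the detection rule here, but one minimal illustrative sketch (not the authors' method) is to compare the observed reliance rate against the automation's actual capability; the function name and threshold below are hypothetical:

```python
# Illustrative sketch only: flag possible over-trust or under-trust by
# comparing how often the user relies on the automation with how often
# the automation is actually capable. Assumes both can be logged per
# decision; the 0.3 threshold is an arbitrary placeholder.

def calibration_status(reliance_log, capability_log, threshold=0.3):
    """reliance_log:   list of bools, True if the user relied on automation
    capability_log: list of bools, True if the automation was capable
    Returns "over-trust", "under-trust", or "calibrated"."""
    reliance_rate = sum(reliance_log) / len(reliance_log)
    capability_rate = sum(capability_log) / len(capability_log)
    gap = reliance_rate - capability_rate
    if gap > threshold:
        return "over-trust"      # relying more than capability warrants
    if gap < -threshold:
        return "under-trust"     # relying less than capability warrants
    return "calibrated"

# Example: user relies in 9 of 10 decisions, automation capable in only 4.
status = calibration_status([True] * 9 + [False], [True] * 4 + [False] * 6)
```

In this toy example the reliance rate (0.9) exceeds the capability rate (0.4) by more than the threshold, so the sketch would flag over-trust and a trust calibration cue could then be triggered.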

Citation (APA)

Okamura, K., & Yamada, S. (2018). Adaptive trust calibration for supervised autonomous vehicles. In Adjunct Proceedings - 10th International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2018 (pp. 92–97). Association for Computing Machinery, Inc. https://doi.org/10.1145/3239092.3265948
