Practical Guidance for Evaluating Calibrated Trust

Abstract

As automation, autonomy, and AI become more prevalent, human factors engineers are called to evaluate whether users trust that automation. We argue that the true question is whether users trust the automation appropriately. In other words, do they trust it as much as it deserves to be trusted? When automation performance decreases, are users aware so they decrease their momentary trust, and more importantly, their reliance on the automation? There are few metrics that focus specifically on calibrated trust, and the trust literature can be daunting for human factors professionals who are not experts in trust. We offer two aids to human factors practitioners tasked with evaluating trust. The first is an easy-to-use calibrated trust framework that simplifies the aspects of trust into Belief, Understanding, Intent, and Reliance. The second is the introduction of Calibration Points, a way to classify situations in which the automation excels or situations in which the automation is degraded. By identifying these Calibration Points, human factors practitioners can evaluate whether human trust is aligned with automation performance. This approach allows the practitioner to leverage the rich set of evaluation techniques that have been developed to evaluate trust.

Citation (APA)

McDermott, P. L., & Ten Brink, R. N. (2019). Practical Guidance for Evaluating Calibrated Trust. In Proceedings of the Human Factors and Ergonomics Society (Vol. 63, pp. 362–366). SAGE Publications Inc. https://doi.org/10.1177/1071181319631379
