Trust in imperfect automation

Abstract

The type of unreliability an automated system exhibits can affect a user's perception of that automation's overall operational ability. A software program that makes one type of mistake might be judged more harshly than another program that makes a different sort of error, even if both have equal success rates. Here I use a Hidden Object Game to examine people's differing responses to a program that appears either to miss its target objects or to make false alarms. Playing at both high and low clutter levels, participants who believed they were working with an automated system that missed targets decreased their trust in that automation, and judged its performance more harshly, compared to participants who believed the automation was making false alarms. Participants in the combined low-clutter and miss condition showed the strongest decrease in trust; when asked to guess how often the program had been correct, this group also gave it the lowest mean score. These results demonstrate that in a target detection task, automation that misses targets is judged more harshly than automation that errs on the side of false alarms.

Citation (APA)

Kaplan, A. (2019). Trust in imperfect automation. In Advances in Intelligent Systems and Computing (Vol. 824, pp. 47–53). Springer Verlag. https://doi.org/10.1007/978-3-319-96071-5_5
