Reinforcing trust in autonomous systems: A quantum cognitive approach


Abstract

We investigated whether an autonomous system can be provided with reasoning that maintains trust between human and system even when the two reach discrepant conclusions. Tversky and Kahneman's research [27], and the vast literature following it, distinguishes two modes of human decision making: System 1, which is fast, emotional, and automatic, and System 2, which is slower, more deliberative, and more rational. Autonomous systems are thus far endowed with System 2. So when interacting with such a system, humans may follow System 1, unaware that their autonomous partner follows System 2. This can easily confuse the user when a discrepant decision is reached, eroding their trust in the autonomous system. Hence we investigated whether trust in a message could interfere with trust in its source, namely the autonomous system. For this we presented participants with images that might or might not be genuine, and found that they often distrusted the image (e.g., as photoshopped) when they distrusted its content. We present a quantum cognitive model that explains this interference. We speculate that enriching an autonomous system with this model will allow it to predict when its decisions may confuse the user, to take proactive steps to prevent this, and thereby to reinforce and maintain trust in the system.
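The interference the abstract refers to follows the standard quantum-probability account of judgment: amplitudes for the two paths to a judgment (content genuine vs. not genuine, then trust vs. distrust the image) are summed before squaring, so a cross term can push the total probability above or below what the classical law of total probability allows. A minimal numeric sketch of that cross term — all amplitude values below are illustrative, not taken from the paper's data:

```python
import math
import cmath

# Illustrative amplitudes (hypothetical, not from the study):
# the participant first judges the content genuine (g) or not (ng),
# then judges whether to trust the image itself as a photo.
a_g = 0.8                                   # amplitude: content judged genuine
a_ng = math.sqrt(1 - a_g ** 2)              # amplitude: content judged not genuine
b_tg = 0.9                                  # amplitude: trust image | genuine content
b_tng = 0.3 * cmath.exp(1j * math.pi / 3)   # amplitude: trust image | non-genuine
                                            # content, with a relative phase

# Classical law of total probability: probabilities of the two paths add.
p_classical = abs(a_g * b_tg) ** 2 + abs(a_ng * b_tng) ** 2

# Quantum version: amplitudes add first, then the sum is squared,
# which introduces an interference (cross) term.
p_quantum = abs(a_g * b_tg + a_ng * b_tng) ** 2
interference = p_quantum - p_classical      # = 2*Re(conj(a_g*b_tg) * a_ng*b_tng)

print(f"classical total probability: {p_classical:.4f}")
print(f"quantum total probability:   {p_quantum:.4f}")
print(f"interference term:           {interference:+.4f}")
```

With these values the interference term is positive (it equals 2·0.72·0.18·cos(π/3)), inflating trust in the image beyond the classical prediction; a phase past π/2 would instead suppress it, which is the kind of distrust spillover the study reports.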

References (selected, via Scopus)

- Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment
- Ubiquitous quantum structure: From psychology to finance
- A quantum probability explanation for violations of 'rational' decision theory

Cited by (selected, via Scopus)

- A quantum-inspired model for human-automation trust in air traffic controllers derived from functional Magnetic Resonance Imaging and correlated with behavioural indicators
- Temporal Evolution of Trust in Artificial Intelligence-Supported Decision-Making
- Intermediate Judgments and Trust in Artificial Intelligence-Supported Decision-Making

Citation (APA)

Bruza, P. D., & Hoenkamp, E. C. (2018). Reinforcing trust in autonomous systems: A quantum cognitive approach. In Studies in Systems, Decision and Control (Vol. 117, pp. 215–224). Springer International Publishing. https://doi.org/10.1007/978-3-319-64816-3_12

