We investigated whether an autonomous system can be provided with reasoning that maintains trust between human and system even when the two reach discrepant conclusions. Tversky and Kahneman's research [27], and the vast literature following it, distinguishes two modes of human decision making: System 1, which is fast, emotional, and automatic, and System 2, which is slower, more deliberative, and more rational. Autonomous systems have thus far been endowed only with System 2. So when interacting with such a system, humans may follow System 1, unaware that their autonomous partner follows System 2. This can easily confuse the user when a discrepant decision is reached, eroding their trust in the autonomous system. We therefore investigated whether trust in a message could interfere with trust in its source, namely the autonomous system. To this end we presented participants with images that might or might not be genuine, and found that they often distrusted the image itself (e.g., as photoshopped) when they distrusted its content. We present a quantum cognitive model that explains this interference. We speculate that enriching an autonomous system with this model will allow it to predict when its decisions may confuse the user, take proactive steps to prevent this, and thereby reinforce and maintain trust in the system.
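The abstract does not spell the model out, but quantum cognitive models of this kind typically replace the classical law of total probability with a composition of probability amplitudes, which introduces an interference term. The sketch below is illustrative only: the event labels G ("the content is judged genuine") and T ("the user trusts the image"), and the phase θ, are our assumed notation, not the authors'.

```latex
% Classical law of total probability: trust in the image (T) marginalized
% over whether its content is judged genuine (G) or not (\neg G):
%   P(T) = P(G) P(T|G) + P(\neg G) P(T|\neg G)
% A quantum cognitive model composes probability amplitudes instead,
% which adds an interference term with phase \theta:
\[
P_q(T)
  = \bigl|\, \psi_{G}\,\psi_{T \mid G} + \psi_{\neg G}\,\psi_{T \mid \neg G} \,\bigr|^{2}
  = P(G)\,P(T \mid G) + P(\neg G)\,P(T \mid \neg G)
    + 2\sqrt{P(G)\,P(T \mid G)\,P(\neg G)\,P(T \mid \neg G)}\;\cos\theta .
\]
% \cos\theta = 0 recovers the classical law; \cos\theta < 0 captures the
% reported effect, where distrust of the content suppresses trust in the
% image (and, by extension, in its autonomous source) below what any
% classical mixture of the two paths would allow.
```

On this reading, a nonzero interference term is what the model uses to explain trust in the message "spilling over" into trust in the source; a system that estimated θ from behavioral data could, in principle, anticipate when its System 2 conclusion will clash with a user's System 1 judgment.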
CITATION STYLE
Bruza, P. D., & Hoenkamp, E. C. (2018). Reinforcing trust in autonomous systems: A quantum cognitive approach. In Studies in Systems, Decision and Control (Vol. 117, pp. 215–224). Springer International Publishing. https://doi.org/10.1007/978-3-319-64816-3_12