Cognitive reasoning and trust in human-robot interactions

Abstract

We are witnessing accelerating technological advances in autonomous systems, of which driverless cars and home-assistive robots are prominent examples. As mobile autonomy becomes embedded in our society, we increasingly depend on decisions made by mobile autonomous robots and interact with them socially. Key questions that need to be asked are how to ensure safety and trust in such interactions. How do we know when to trust a robot? How much should we trust? And how much should the robots trust us? This paper gives an overview of a probabilistic logic for expressing trust between human or robotic agents, such as “agent A has 99% trust in agent B’s ability or willingness to perform a task”, and the role it can play in explaining trust-based decisions and agents’ dependence on one another. The logic is founded on a probabilistic notion of belief, supports cognitive reasoning about goals and intentions, and admits quantitative verification via model checking, which can be used to evaluate trust in human-robot interactions. The paper concludes by summarising future challenges for modelling and verification in this important field.
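
The paper's logic itself is not reproduced here, but as a minimal illustrative sketch one can ground a quantitative trust statement such as “agent A has 99% trust in agent B” in a graded belief over observed task outcomes. The Beta-distribution model below is an assumption chosen for illustration (a common probabilistic-trust construction), not necessarily the belief semantics defined in the paper.

    # Illustrative only: agent A's belief about agent B's reliability,
    # modelled as a Beta distribution updated from observed task outcomes.
    # This sketches a probabilistic notion of trust as a belief degree;
    # it is not the logic or verification machinery of the paper.

    from dataclasses import dataclass

    @dataclass
    class TrustBelief:
        """Agent A's belief that agent B will perform a task successfully."""
        successes: float = 1.0  # Beta prior pseudo-counts (uninformative: 1, 1)
        failures: float = 1.0

        def observe(self, succeeded: bool) -> None:
            """Update the belief after watching B attempt the task."""
            if succeeded:
                self.successes += 1
            else:
                self.failures += 1

        @property
        def trust(self) -> float:
            """Expected probability that B performs the task (Beta mean)."""
            return self.successes / (self.successes + self.failures)

        def trusts_at_least(self, threshold: float) -> bool:
            """E.g. 'A has 99% trust in B' reads as trust >= 0.99."""
            return self.trust >= threshold

    belief = TrustBelief()
    for outcome in [True, True, True, False, True]:
        belief.observe(outcome)
    print(f"A's trust in B: {belief.trust:.2f}")  # ~0.71 after 4/5 successes
    print(belief.trusts_at_least(0.99))           # False

In the setting the abstract describes, such quantitative properties would be checked against a formal model of the interaction using a probabilistic model checker (such as PRISM); the sketch above only illustrates the underlying idea of trust as a graded, updatable belief.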

Citation (APA)

Kwiatkowska, M. (2017). Cognitive reasoning and trust in human-robot interactions. In Lecture Notes in Computer Science (Vol. 10185, pp. 3–11). Springer. https://doi.org/10.1007/978-3-319-55911-7_1
