Understandable robots

121 citations · 93 Mendeley readers

This article is free to access.

Abstract

As robots become increasingly capable and autonomous, there is a growing need for humans to understand what robots do and think. In this paper, we investigate what such understanding means and includes, and how robots can be designed to support it. After an in-depth survey of related earlier work, we discuss examples showing that understanding covers not only the intentions of the robot, but also its desires, knowledge, beliefs, emotions, perceptions, capabilities, and limitations. The term understanding is formally defined, and the term communicative actions is defined to denote the various ways in which a robot may support a human's understanding of the robot. A novel model of interaction for understanding is presented. The model describes how both human and robot may use a first- or higher-order theory of mind to understand each other and perform communicative actions that support the other's understanding. It also covers simpler cases in which the robot performs static communicative actions to support the human's understanding of the robot. In general, communicative actions performed by the robot aim at reducing the mismatch between the mind of the robot and the robot's inferred model of the human's model of that mind. Based on the proposed model, a set of questions is formulated to serve as support when developing and implementing the model in real interacting robots.
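The mismatch-reduction idea from the abstract can be sketched as a toy program. This is purely an illustration, not code from the paper: the representation of a "mind" as a set of fact strings, the function names, and the example facts are all assumptions made here for concreteness. The robot holds its own mind state and a second-order model (what it infers the human believes about that mind), and a communicative action announces one fact the human appears to be missing.

```python
# Illustrative sketch (not from the paper): a robot's mind modeled as a
# set of facts (intentions, beliefs, capabilities, limitations), and the
# robot's inferred second-order model of what the human believes about
# that mind. A "communicative action" announces one missing fact,
# shrinking the mismatch between the two models.

def mismatch(robot_mind, inferred_human_model):
    """Facts on which the two models disagree (missing or spurious)."""
    return robot_mind.symmetric_difference(inferred_human_model)

def next_communicative_action(robot_mind, inferred_human_model):
    """Pick one fact the human is inferred to lack; None if none remain."""
    missing = robot_mind - inferred_human_model
    return min(missing) if missing else None  # min() for determinism

# Hypothetical example: the human knows the robot's intention but not
# its limitation, so the robot should communicate the limitation.
robot_mind = {"intention: deliver cup", "limitation: cannot climb stairs"}
human_model = {"intention: deliver cup"}

action = next_communicative_action(robot_mind, human_model)
if action is not None:
    human_model.add(action)  # assume the utterance was understood

print(mismatch(robot_mind, human_model))  # empty once the models agree
```

In this sketch each communicative action removes exactly one element of the mismatch; a fuller implementation would weight facts by relevance and model imperfect uptake by the human.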

Cite (APA)

Hellström, T., & Bensch, S. (2018). Understandable robots. Paladyn, 9(1), 110–123. https://doi.org/10.1515/pjbr-2018-0009
