The Perceptual Belief Problem: Why Explainability Is a Tough Challenge in Social Robotics


Abstract

The explainability of robotic systems depends on people's ability to reliably attribute perceptual beliefs to robots, i.e., what robots know (or believe) about objects and events in the world based on their perception. However, the perceptual systems of robots are not necessarily well understood by the majority of people interacting with them. In this article, we explain why this is a significant, difficult, and unique problem in social robotics. The inability to judge what a robot knows (and does not know) about the physical environment it shares with people gives rise to a host of communicative and interactive issues, including difficulties in communicating about objects or adapting to events in that environment. The challenge faced by social robotics researchers and designers who want to facilitate appropriate attributions of perceptual beliefs to robots is to shape human-robot interactions so that people understand what robots know about objects and events in the environment. To meet this challenge, we argue, it is necessary to advance our knowledge of when and why people form incorrect or inadequate mental models of robots' perceptual and cognitive mechanisms. We outline a general approach to studying this empirically and discuss potential solutions to the problem.

Citation (APA)
Thellman, S., & Ziemke, T. (2021). The Perceptual Belief Problem: Why Explainability Is a Tough Challenge in Social Robotics. ACM Transactions on Human-Robot Interaction, 10(3). https://doi.org/10.1145/3461781
