Robot Theory of Mind with Reverse Psychology


Abstract

Theory of mind (ToM) refers to the human ability to infer other people's desires, beliefs, and intentions. Acquiring ToM skills is crucial for natural interaction between robots and humans. A core component of ToM is the ability to attribute false beliefs. In this paper, a collaborative robot tries to assist a human partner who plays a trust-based card game against another human. The robot infers its partner's trust in the robot's decision system via reinforcement learning. Robot ToM here refers to the ability to implicitly anticipate the human collaborator's strategy and inject that prediction into the robot's optimal decision model for better team performance. In our experiments, the robot learns when its human partner does not trust it and adapts the recommendations of its optimal policy accordingly to keep team performance effective. The interesting finding is that the optimal robot policy attempts to use reverse psychology on its human collaborator when trust is low. This finding provides guidance for the study of trustworthy robot decision models with a human partner in the loop.
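
To make the learning setup concrete, the following is a minimal, hypothetical sketch of how such a recommendation policy could be learned with tabular Q-learning. The discrete trust levels, action set, reward, and human-response model are illustrative assumptions, not the decision model used in the paper; the sketch only shows how a "recommend the worse option" strategy can emerge when trust is low.

import random
from collections import defaultdict

# Illustrative assumption: trust is observed as one of three discrete levels,
# and the human follows the robot's recommendation with probability equal
# to their trust in it.
TRUST_LEVELS = [0.1, 0.5, 0.9]
ACTIONS = ["recommend_good_card", "recommend_bad_card"]

ALPHA = 0.1       # learning rate
EPSILON = 0.1     # exploration rate
EPISODES = 20000

q = defaultdict(float)  # Q[(trust, action)] -> value estimate

def interact(trust, action):
    """One recommendation round: reward 1 if the human ends up on the better card."""
    follows = random.random() < trust
    if action == "recommend_good_card":
        picked_good = follows
    else:
        picked_good = not follows  # human ignores the bad advice -> better card
    return 1.0 if picked_good else 0.0

for _ in range(EPISODES):
    trust = random.choice(TRUST_LEVELS)
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q[(trust, a)])
    reward = interact(trust, action)
    # Single-step update: each episode ends after one recommendation.
    q[(trust, action)] += ALPHA * (reward - q[(trust, action)])

for trust in TRUST_LEVELS:
    best = max(ACTIONS, key=lambda a: q[(trust, a)])
    print(f"trust={trust}: {best}  (Q={q[(trust, best)]:.2f})")

Run as-is, the learned policy recommends the better card at high trust but the worse card at low trust, mirroring the reverse-psychology behavior reported in the abstract.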

Citation (APA)

Yu, C., Serhan, B., Romeo, M., & Cangelosi, A. (2023). Robot Theory of Mind with Reverse Psychology. In ACM/IEEE International Conference on Human-Robot Interaction (pp. 545–547). IEEE Computer Society. https://doi.org/10.1145/3568294.3580144
