Double-meaning agreements by two robots to conceal incoherent agreements to user's opinions


Abstract

In conversations between people and social robots, it is important for robots to agree with users' opinions in order to build relationships. However, a robot's agreements are often incoherent with the user's opinions because of speech-recognition failures. This paper proposes a new approach, called double-meaning agreement, to conceal this incoherence. The approach exploits an interaction protocol between two robots: the protocol produces an agreement that carries a double meaning, enabling the user to interpret the robot's incoherent agreement as a coherent one. To evaluate the effects of double-meaning agreement, we conducted an experiment. The results showed that participants who talked with two robots using double-meaning agreement felt better understood by the robots than those who talked with a single robot without it. These findings will contribute to developing social robots that keep conversations coherent and build social relationships.

Citation (APA)

Iio, T., Yoshikawa, Y., & Ishiguro, H. (2021). Double-meaning agreements by two robots to conceal incoherent agreements to user’s opinions. Advanced Robotics, 35(19), 1145–1155. https://doi.org/10.1080/01691864.2021.1974939
