Users' belief awareness in reinforcement learning-based situated human-robot dialogue management

Abstract

Others can perceive the world differently than we do. Understanding this divergence is an ability, known as perspective taking in developmental psychology, that humans exploit in daily social interactions. A recent trend in robotics aims at endowing robots with similar mental mechanisms, so that they can naturally and efficiently plan tasks and communicate about them. In this paper we address this challenge by extending a state-of-the-art goal-oriented dialogue management framework, the Hidden Information State (HIS). The new version makes use of the robot's awareness of the user's beliefs in a reinforcement learning-based situated dialogue management optimisation procedure. The proposed solution thus enables the system to cope not only with communication ambiguities due to a noisy channel but also with possible misunderstandings due to divergence between the beliefs of the robot and its interlocutor in a human-robot interaction (HRI) context. We show the relevance of the approach by comparing different handcrafted and learnt dialogue policies, with and without divergent belief reasoning, in an in-house pick-place-carry scenario through user trials in a simulated 3D environment.

Citation (APA)

Ferreira, E., Milliez, G., Lefèvre, F., & Alami, R. (2015). Users' belief awareness in reinforcement learning-based situated human-robot dialogue management. In Natural Language Dialog Systems and Intelligent Assistants (pp. 73–86). Springer International Publishing. https://doi.org/10.1007/978-3-319-19291-8_7
