Combining Visual and Social Dialogue for Human-Robot Interaction

Abstract

We will demonstrate a prototype multimodal conversational AI system that acts as a receptionist in a hospital waiting room, combining visually-grounded dialogue with social conversation. The system supports conversation about visual objects in the waiting room (e.g. looking for available seats or personal belongings), task-based dialogues about navigation and check-in procedures in the hospital, as well as access to the latest news and a quiz game about coronavirus. The prototype therefore demonstrates how to weave together a wide range of natural, everyday conversations with end users that vary in complexity: from complex visual dialogue, to chitchat and quiz games, to task-oriented domain-specific conversations. The system can currently be demonstrated via a web-based interface and will soon be deployed on the ARI robot in a hospital waiting room.
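
The abstract describes a system that weaves together several dialogue skills (visual dialogue, task-oriented dialogue, news, chitchat, and a quiz game). As a rough illustration only, and not the authors' implementation, the sketch below shows one hypothetical way such skills could be routed by a simple keyword-based dispatcher; all class and function names here are assumptions.

# Hypothetical sketch (not the paper's actual architecture): route a user
# utterance to one of several dialogue "skills" by keyword matching.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Skill:
    name: str
    keywords: List[str]
    respond: Callable[[str], str]

def visual_skill(utterance: str) -> str:
    # A real system would query a vision module that grounds objects
    # (seats, bags) seen by the robot's camera; here we just acknowledge.
    return "Let me look around the waiting room for that."

def task_skill(utterance: str) -> str:
    return "I can help you check in or find your way to the right department."

def quiz_skill(utterance: str) -> str:
    return "Here is a coronavirus quiz question: ..."

def chitchat_skill(utterance: str) -> str:
    return "Happy to chat! Have you seen today's news?"

@dataclass
class Dispatcher:
    skills: List[Skill] = field(default_factory=list)
    fallback: Callable[[str], str] = chitchat_skill

    def respond(self, utterance: str) -> str:
        text = utterance.lower()
        for skill in self.skills:
            if any(keyword in text for keyword in skill.keywords):
                return skill.respond(utterance)
        return self.fallback(utterance)

if __name__ == "__main__":
    dispatcher = Dispatcher(skills=[
        Skill("visual", ["seat", "bag", "belongings"], visual_skill),
        Skill("task", ["check in", "check-in", "where is", "navigate"], task_skill),
        Skill("quiz", ["quiz", "coronavirus"], quiz_skill),
    ])
    print(dispatcher.respond("Is there a free seat anywhere?"))
    print(dispatcher.respond("How do I check in?"))

In practice, skill selection in such systems is typically learned or rule-based over richer dialogue state rather than keyword matching; the dispatcher above only illustrates the idea of combining heterogeneous skills behind a single conversational front end.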

Citation (APA)

Gunson, N., Hernandez Garcia, D., Part, J. L., Yu, Y., Sieińska, W., Dondrup, C., & Lemon, O. (2021). Combining Visual and Social Dialogue for Human-Robot Interaction. In ICMI 2021 - Proceedings of the 2021 International Conference on Multimodal Interaction (pp. 841–842). Association for Computing Machinery, Inc. https://doi.org/10.1145/3462244.3481303
