Multimodal interaction abilities for a robot companion


Abstract

Among the cognitive abilities a robot companion must be endowed with, human perception and speech understanding are both fundamental in the context of multimodal human-robot interaction. In order to provide a mobile robot with visual perception of its user and means to handle verbal and multimodal communication, we have developed and integrated two components. In this paper we focus on an interactively distributed multiple object tracker dedicated to two-handed gestures and head location in 3D. Its relevance is highlighted by on- and off-line evaluations on data acquired by the robot. Implementation and preliminary experiments on a household robot companion, including speech recognition and understanding as well as basic fusion with gesture, are then demonstrated. The latter illustrate how vision can assist speech by resolving location references and object/person IDs in verbal statements in order to interpret natural deictic commands given by humans. Extensions of our work are finally discussed. © 2008 Springer-Verlag Berlin Heidelberg.
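As an illustration of the speech-gesture fusion idea described above, the following minimal Python sketch shows how a deictic slot in a parsed verbal command ("put that there") might be filled in with the 3D target estimated by a gesture tracker. This is not the authors' implementation; all names (GestureObservation, KnownObject, resolve_deictic, the slot marker "<deictic>", and the distance threshold) are hypothetical and chosen only to make the fusion step concrete.

```python
# Hedged sketch of speech/gesture fusion for deictic command interpretation.
# All class and function names here are illustrative assumptions, not the
# components described in the paper.
from dataclasses import dataclass
from math import dist
from typing import Optional

@dataclass
class GestureObservation:
    target_xyz: tuple    # 3D location pointed at, from the tracker
    timestamp: float     # time at which the pointing gesture peaked

@dataclass
class KnownObject:
    object_id: str
    xyz: tuple           # known 3D position of the object

def resolve_deictic(command: dict,
                    gesture: Optional[GestureObservation],
                    known_objects: list,
                    max_dist: float = 0.5) -> dict:
    """Replace deictic slots ('that', 'there') in a parsed command with the
    object or location indicated by the most recent pointing gesture."""
    if command.get("object") == "<deictic>" and gesture is not None:
        # Pick the known object closest to the pointed-at 3D location.
        nearest = min(known_objects,
                      key=lambda o: dist(o.xyz, gesture.target_xyz),
                      default=None)
        if nearest is not None and dist(nearest.xyz, gesture.target_xyz) <= max_dist:
            command["object"] = nearest.object_id
    if command.get("location") == "<deictic>" and gesture is not None:
        # Fall back to the raw pointed-at coordinates for 'there'.
        command["location"] = gesture.target_xyz
    return command

# Example: "put that there" plus a pointing gesture toward a cup.
cmd = {"action": "put", "object": "<deictic>", "location": "<deictic>"}
gest = GestureObservation(target_xyz=(1.2, 0.3, 0.8), timestamp=12.4)
objects = [KnownObject("cup_01", (1.15, 0.32, 0.80)),
           KnownObject("book_02", (2.50, 1.10, 0.75))]
print(resolve_deictic(cmd, gest, objects))
```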

Citation (APA)

Burger, B., Ferrané, I., & Lerasle, F. (2008). Multimodal interaction abilities for a robot companion. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5008 LNCS, pp. 549–558). https://doi.org/10.1007/978-3-540-79547-6_53
