Assistive robot multi-modal interaction with augmented 3D vision and dialogue


Abstract

This paper presents a multi-modal interface for interaction between people with physical disabilities and an assistive robot. The interaction is performed through a dialogue mechanism and augmented 3D vision glasses that provide visual assistance to an end user commanding the robot to perform Daily Life Activities (DLAs). The glasses can overlay augmented-reality menus and information dialogues on the view of the real world, or on a simulated environment for laboratory tests and user evaluation. The dialogue is implemented as a finite state machine and supports Automatic Speech Recognition (ASR) and a Text-to-Speech (TTS) converter. The final study evaluates the effectiveness of these visual and auditory aids in enabling the end user to command the assistive robot ASIBOT to perform a given task.

Keywords: assistive robotics, end-user development, human-robot interaction, multi-modal interaction, augmented reality, speech recognition
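The abstract describes the dialogue as a finite state machine driven by recognized speech and answering through TTS. A minimal sketch of that pattern is shown below; the state names, wake phrase, and replies are illustrative assumptions, not details of the actual ASIBOT implementation.

```python
# Hypothetical dialogue finite state machine in the style described in
# the abstract. States, wake phrase, and replies are illustrative only.
from enum import Enum, auto

class State(Enum):
    IDLE = auto()        # waiting for the wake phrase
    LISTENING = auto()   # waiting for a task command
    CONFIRMING = auto()  # asking the user to confirm the command
    EXECUTING = auto()   # robot is performing the task

class DialogueFSM:
    def __init__(self):
        self.state = State.IDLE
        self.pending_command = None

    def on_utterance(self, text: str) -> str:
        """Consume one ASR result and return the reply to send to TTS."""
        if self.state is State.IDLE:
            if text == "hello robot":
                self.state = State.LISTENING
                return "What should I do?"
            return ""
        if self.state is State.LISTENING:
            self.pending_command = text
            self.state = State.CONFIRMING
            return f"Should I {text}?"
        if self.state is State.CONFIRMING:
            if text == "yes":
                self.state = State.EXECUTING
                return f"Executing: {self.pending_command}"
            self.state = State.LISTENING
            return "Cancelled. What should I do?"
        return "Busy."

fsm = DialogueFSM()
print(fsm.on_utterance("hello robot"))    # -> What should I do?
print(fsm.on_utterance("fetch the cup"))  # -> Should I fetch the cup?
print(fsm.on_utterance("yes"))            # -> Executing: fetch the cup
```

In such a design, each ASR result triggers at most one transition, and confirmation before execution guards against misrecognized commands; the same transition table could equally drive the augmented-reality menus mentioned in the abstract.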

Citation (APA)

Victores, J. G., Cañadillas, F. R., Morante, S., Jardón, A., & Balaguer, C. (2014). Assistive robot multi-modal interaction with augmented 3D vision and dialogue. In Advances in Intelligent Systems and Computing (Vol. 252, pp. 209–217). Springer Verlag. https://doi.org/10.1007/978-3-319-03413-3_15
