Spontaneous speech understanding for robust multi-modal human-robot communication

Abstract

This paper presents a speech understanding component that enables robust situated human-robot communication. The aim is to obtain semantic interpretations of utterances that serve as a basis for multi-modal dialog management, even when the recognized word stream is not grammatically correct. For the understanding process, we designed semantically processable units adapted to the domain of situated communication. Our framework supports the specific characteristics of spontaneous speech used in combination with gestures in a real-world scenario, and it also provides information about dialog acts. Finally, we present a processing mechanism that uses these concept structures to generate the most likely semantic interpretation of an utterance and to evaluate that interpretation with respect to semantic coherence.
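To make the idea of scoring candidate interpretations by semantic coherence concrete, here is a minimal, purely illustrative sketch. All names (`Concept`, `Interpretation`, the role-compatibility table, and the scoring scheme) are assumptions for illustration only, not the mechanism described in the paper:

```python
# Hypothetical sketch: rank candidate semantic interpretations of a
# spontaneous utterance by a toy coherence measure. The concept roles
# and the compatibility table are invented for this example.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Concept:
    """One filled concept slot, e.g. an ACTION, OBJECT, or LOCATION."""
    role: str
    value: str


@dataclass
class Interpretation:
    """One candidate mapping of recognized words onto concept slots."""
    concepts: List[Concept] = field(default_factory=list)


# Toy compatibility table: which concept roles plausibly co-occur.
COMPATIBLE = {("ACTION", "OBJECT"), ("ACTION", "LOCATION"), ("OBJECT", "LOCATION")}


def coherence(interp: Interpretation) -> float:
    """Fraction of role pairs in the interpretation that are compatible."""
    roles = [c.role for c in interp.concepts]
    pairs = [(a, b) for i, a in enumerate(roles) for b in roles[i + 1:]]
    if not pairs:
        return 0.0
    ok = sum(1 for p in pairs if p in COMPATIBLE or p[::-1] in COMPATIBLE)
    return ok / len(pairs)


def best_interpretation(candidates: List[Interpretation]) -> Interpretation:
    """Pick the candidate with the highest coherence score."""
    return max(candidates, key=coherence)
```

For instance, an interpretation pairing an ACTION with an OBJECT and a LOCATION ("put the cup on the table") would score higher than one that assigns two conflicting ACTION slots, so the more coherent reading of an ungrammatical word stream wins.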

Citation (APA)

Hüwel, S., & Wrede, B. (2006). Spontaneous speech understanding for robust multi-modal human-robot communication. In COLING/ACL 2006 - 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Main Conference Poster Sessions (pp. 391–398). Association for Computational Linguistics (ACL). https://doi.org/10.3115/1273073.1273124
