Interpretation of gestures and speech: A practical approach to multimodal communication


Abstract

Developing multimodal interfaces is not only a matter of technology; it also requires tailoring the interface to the user's communication needs. In command and control applications, the user most often holds the initiative, so gestures and speech (the user's communication channels) must be studied carefully to support a sensible interaction style. In this chapter, we introduce the notion of a semantic frame for integrating gestures and speech in multimodal interfaces. We describe the main elements of a model developed to integrate the use of both channels, and illustrate the model with two fully implemented systems. We also present possible extensions of the model that could improve the supported interaction style as the underlying technologies mature.

Citation (APA)

Pouteau, X. (2001). Interpretation of gestures and speech: A practical approach to multimodal communication. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2155, pp. 159–175). Springer Verlag. https://doi.org/10.1007/3-540-45520-5_10
