Information state based multimodal dialogue management: Estimating conversational engagement from gaze information

Abstract

Thanks to progress in computer vision and human-sensing technologies, human behaviors such as gaze and head pose can now be measured accurately in real time. Previous studies on multimodal user interfaces and intelligent virtual agents have presented many interesting applications that exploit such sensing technologies [1, 2]. However, little work has examined how to extract communication signals from such large volumes of sensor data, or how to use those signals for dialogue management in conversational agents. © 2009 Springer Berlin Heidelberg.

Citation (APA)
Nakano, Y., & Yamaoka, Y. (2009). Information state based multimodal dialogue management: Estimating conversational engagement from gaze information. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5773 LNAI, pp. 531–532). https://doi.org/10.1007/978-3-642-04380-2_77