Modeling gaze behavior for virtual demonstrators

Citations: 3
Mendeley readers: 24

Abstract

Achieving autonomous virtual humans with coherent and natural motions is key to their effectiveness in many educational, training, and therapeutic applications. Among the several aspects to be considered, gaze behavior is an important non-verbal communication channel that plays a vital role in the effectiveness of the resulting animations. This paper focuses on analyzing gaze behavior in demonstrative tasks involving arbitrary locations for target objects and listeners. Our analysis is based on full-body motions captured from human participants performing real demonstrative tasks in varied situations. We address temporal information and coordination with targets and observers at varied positions. © 2011 Springer-Verlag.

Citation (APA)

Huang, Y., Matthews, J. L., Matlock, T., & Kallmann, M. (2011). Modeling gaze behavior for virtual demonstrators. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6895 LNAI, pp. 155–161). https://doi.org/10.1007/978-3-642-23974-8_17
