Understanding human behaviors based on eye-head-hand coordination

Abstract

Action recognition has traditionally focused on processing fixed-camera observations while ignoring non-visual information. In this paper, we explore the dynamic properties of the movements of different body parts in natural tasks: eye, head, and hand movements are tightly coupled with the ongoing task. In light of this, our method takes an agent-centered view and incorporates an extensive description of eye-head-hand coordination. By tracking the course of gaze and head movements, our approach uses gaze and head cues to detect agent-centered attention switches, which are then used to segment an action sequence into action units. Based on recognizing those action primitives, parallel hidden Markov models are applied to model and integrate the probabilistic sequences of the action units of different body parts. An experimental system built to recognize human behaviors in three natural tasks, “unscrewing a jar”, “stapling a letter”, and “pouring water”, demonstrates the effectiveness of the approach.
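
The paper itself does not provide code, but the two core ideas in the abstract (velocity-based attention-switch segmentation and parallel per-body-part HMM scoring) can be sketched. The following is a minimal Python illustration, assuming discrete observation symbols per channel and a hand-rolled forward algorithm; the function names, velocity threshold, sampling rate, and toy model parameters are all hypothetical, not taken from the paper.

```python
import numpy as np

def detect_attention_switches(gaze_deg, fps=30.0, vel_thresh=130.0):
    """Mark frames where gaze velocity exceeds a saccade-like threshold.
    Threshold and sampling rate are illustrative, not the paper's values."""
    vel = np.abs(np.diff(gaze_deg)) * fps        # angular velocity in deg/s
    hot = np.flatnonzero(vel > vel_thresh) + 1   # frames after each fast change
    hot_set = set(hot.tolist())
    return [int(i) for i in hot if i - 1 not in hot_set]  # keep run starts only

def forward_log_likelihood(obs, log_pi, log_A, log_B):
    """Log-likelihood of a discrete observation sequence under one HMM,
    computed with the standard forward algorithm in log space."""
    alpha = log_pi + log_B[:, obs[0]]
    for o in obs[1:]:
        alpha = log_B[:, o] + np.logaddexp.reduce(alpha[:, None] + log_A, axis=0)
    return np.logaddexp.reduce(alpha)

def classify_parallel(channels, models):
    """Score each candidate behavior by summing per-channel HMM log-likelihoods,
    i.e. treating eye, head, and hand streams as conditionally independent."""
    scores = {
        name: sum(forward_log_likelihood(channels[c], *hmms[c]) for c in channels)
        for name, hmms in models.items()
    }
    return max(scores, key=scores.get), scores

if __name__ == "__main__":
    # Toy 2-state, 3-symbol HMMs (log space); emission biases distinguish behaviors.
    def toy_hmm(emission_probs):
        pi = np.log([0.8, 0.2])
        A = np.log([[0.9, 0.1], [0.1, 0.9]])
        B = np.log(emission_probs)
        return pi, A, B

    body_parts = ("eye", "head", "hand")
    models = {
        "pour water":    {c: toy_hmm([[0.7, 0.2, 0.1], [0.1, 0.2, 0.7]]) for c in body_parts},
        "staple letter": {c: toy_hmm([[0.1, 0.2, 0.7], [0.7, 0.2, 0.1]]) for c in body_parts},
    }
    channels = {c: [0, 0, 1, 0, 0] for c in body_parts}  # mostly symbol 0
    print(classify_parallel(channels, models))            # favors "pour water"
```

Summing log-likelihoods across channels treats the eye, head, and hand observation streams as conditionally independent given the behavior, which is one common way to realize the kind of parallel-HMM fusion the abstract describes.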

Citation (APA)

Yu, C., & Ballard, D. H. (2002). Understanding human behaviors based on eye-head-hand coordination. In Lecture Notes in Computer Science (Vol. 2525, pp. 611–619). Springer. https://doi.org/10.1007/3-540-36181-2_61
