A combined probabilistic framework for learning gestures and actions

Abstract

In this paper we introduce a probabilistic approach to support visual supervision and gesture recognition. Task knowledge is both geometric and visual in nature, and it is encoded in parametric eigenspaces. Learning processes for computing modal subspaces (eigenspaces) are the core of tracking and recognition of gestures and tasks. We describe the overall architecture of the system and detail the learning processes and gesture design. Finally, we show experimental results of tracking and recognition in block-world-like assembly tasks and in general human gestures.
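As a rough illustration of the kind of modal-subspace learning the abstract refers to (not the authors' implementation), the sketch below computes a PCA eigenspace from a set of training images and projects a new image onto it; the function names and the choice of plain PCA are assumptions for illustration only.

```python
import numpy as np

def learn_eigenspace(images, k):
    """Learn a k-dimensional modal subspace (eigenspace) from
    training images, each flattened to a vector. This is a generic
    PCA sketch, not the paper's specific learning process."""
    X = np.stack([img.ravel().astype(float) for img in images])  # (n, d)
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered data yields the principal directions
    # (the modes) without forming the d x d covariance matrix.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    basis = Vt[:k]  # top-k eigenvectors spanning the subspace
    return mean, basis

def project(img, mean, basis):
    """Project an image onto the learned eigenspace; the resulting
    coefficients parameterize the observed gesture/task state."""
    return basis @ (img.ravel().astype(float) - mean)
```

In an eigenspace-based tracker or recognizer of this general kind, the low-dimensional projection coefficients, rather than the raw pixels, are what get matched against learned gesture or task models.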

Citation

Escolano, F., Cazorla, M., Gallardo, D., Llorens, F., Satorre, R., & Rizo, R. (1998). A combined probabilistic framework for learning gestures and actions. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1416, pp. 658–667). Springer Verlag. https://doi.org/10.1007/3-540-64574-8_452
