In this paper we introduce a probabilistic approach to support visual supervision and gesture recognition. Task knowledge, which is both geometric and visual in nature, is encoded in parametric eigenspaces. Learning processes for computing modal subspaces (eigenspaces) form the core of the tracking and recognition of gestures and tasks. We describe the overall architecture of the system and detail the learning processes and gesture design. Finally, we present experimental results of tracking and recognition in block-world-like assembly tasks and in general human gestures.
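To make the notion of a modal subspace concrete, the following sketch shows the standard appearance-based construction: flattened training images are centered and decomposed so that their top principal modes span an eigenspace into which new observations are projected. This is a generic illustration under assumed toy data, not the paper's specific parametric-eigenspace implementation; `learn_eigenspace` and `project` are hypothetical helper names.

```python
import numpy as np

def learn_eigenspace(samples, k):
    """samples: (n, d) array of flattened images; k: subspace dimension.
    Returns the mean image and the top-k modes (eigenvectors)."""
    mean = samples.mean(axis=0)
    centered = samples - mean
    # SVD of the centered data yields the principal modes directly;
    # rows of vt are orthonormal directions sorted by variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project(sample, mean, modes):
    """Coefficients of a new image in the learned eigenspace."""
    return modes @ (sample - mean)

# Toy data standing in for flattened gesture images (assumption).
rng = np.random.default_rng(0)
train = rng.normal(size=(20, 64))   # 20 "images" of 64 pixels each
mean, modes = learn_eigenspace(train, k=3)
coeffs = project(train[0], mean, modes)
print(modes.shape, coeffs.shape)    # (3, 64) (3,)
```

Recognition and tracking then operate on the low-dimensional coefficient vectors rather than on the raw images, which is what makes eigenspace methods tractable for visual supervision.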
Escolano, F., Cazorla, M., Gallardo, D., Llorens, F., Satorre, R., & Rizo, R. (1998). A combined probabilistic framework for learning gestures and actions. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1416, pp. 658–667). Springer Verlag. https://doi.org/10.1007/3-540-64574-8_452