Modeling sense disambiguation of human pose: Recognizing action at a distance by key poses


Abstract

We propose a methodology for recognizing actions at a distance by observing human poses and deriving descriptors that capture their motion patterns. Human poses often carry a strong visual sense (intended meaning) that describes the related action unambiguously. Identifying the intended meaning of poses, however, is challenging because of their variability, and such variations lead to visual sense ambiguity. From a large vocabulary of poses (visual words), we prune out ambiguous poses and extract key poses (or key words) using a centrality measure of graph connectivity [1]. Under this framework, finding the key poses for a given sense (i.e., action type) amounts to constructing a graph with poses as vertices and then identifying the most "important" vertices in the graph (following centrality theory). Results on four standard activity recognition datasets show the efficacy of our approach compared to the present state of the art. © 2011 Springer-Verlag Berlin Heidelberg.
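The abstract's key-pose selection can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy pose descriptors, the cosine-similarity edge weights, and the use of eigenvector centrality (via power iteration) are all assumptions standing in for whatever descriptors and centrality measure the paper actually uses.

```python
# Sketch of key-pose selection via graph centrality, following the abstract's
# framework: vertices are pose descriptors, edges are weighted by pose
# similarity, and the most "central" vertices are taken as key poses.
# Descriptor format, cosine similarity, and eigenvector centrality are
# illustrative assumptions, not the paper's exact choices.
import math

def cosine(a, b):
    """Cosine similarity between two pose descriptors (assumed measure)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def eigenvector_centrality(weights, iters=100):
    """Power iteration on the weighted adjacency matrix."""
    n = len(weights)
    c = [1.0 / n] * n
    for _ in range(iters):
        nxt = [sum(weights[i][j] * c[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(v * v for v in nxt)) or 1.0
        c = [v / norm for v in nxt]
    return c

def key_poses(descriptors, k=2):
    """Return indices of the k most central poses in the similarity graph."""
    n = len(descriptors)
    # Weighted graph: edge weight = pose similarity; no self-loops.
    w = [[cosine(descriptors[i], descriptors[j]) if i != j else 0.0
          for j in range(n)] for i in range(n)]
    cent = eigenvector_centrality(w)
    # High-centrality poses sit in a tight cluster of mutually similar poses,
    # so they act as unambiguous "key words" for the action; outliers
    # (ambiguous poses) receive low centrality and are pruned.
    return sorted(range(n), key=lambda i: cent[i], reverse=True)[:k]

# Toy vocabulary: two mutually similar poses and one outlier.
poses = [[1.0, 0.9, 0.1], [0.9, 1.0, 0.2], [0.1, 0.2, 1.0]]
print(key_poses(poses, k=2))
```

On the toy vocabulary above, the two mutually similar poses dominate the centrality ranking and the outlier is pruned, which mirrors the abstract's idea of discarding ambiguous poses and keeping key ones.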

CITATION STYLE

APA

Mukherjee, S., Biswas, S. K., & Mukherjee, D. P. (2011). Modeling sense disambiguation of human pose: Recognizing action at a distance by key poses. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6492 LNCS, pp. 244–255). https://doi.org/10.1007/978-3-642-19315-6_19
