View and style-independent action manifolds for human activity recognition

Citations: 24 · Mendeley readers: 60

This article is free to access.

Abstract

We introduce a novel approach to automatically learning intuitive and compact descriptors of human body motions for activity recognition. Each action descriptor is produced by first applying Temporal Laplacian Eigenmaps to view-dependent videos, producing a style-invariant embedded manifold for each view separately. All view-dependent manifolds are then automatically combined to discover a unified representation, which models an action in a single three-dimensional space independently of style and viewpoint. In addition, a bidirectional nonlinear mapping function is incorporated to allow actions to be projected between the original and embedded spaces. The proposed framework is evaluated on a real and challenging dataset (IXMAS), which comprises a variety of actions seen from arbitrary viewpoints. Experimental results demonstrate robustness against style and view variation and match the most accurate action recognition methods. © 2010 Springer-Verlag.
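The abstract outlines a three-stage pipeline: a per-view temporal embedding, an automatic combination of the view-dependent manifolds, and a bidirectional nonlinear mapping between the original and embedded spaces. The Python sketch below (NumPy, SciPy, scikit-learn) illustrates the general shape of such a pipeline under stated assumptions; it is not the authors' implementation. Plain Laplacian eigenmaps with added temporal edges stand in for Temporal Laplacian Eigenmaps, Procrustes alignment plus averaging stands in for the paper's automatic view combination, and thin-plate-spline RBF interpolants stand in for the bidirectional mapping. All function names and parameters are illustrative.

    import numpy as np
    from scipy.interpolate import RBFInterpolator
    from scipy.linalg import eigh
    from scipy.sparse import csgraph
    from scipy.spatial import procrustes
    from sklearn.neighbors import kneighbors_graph

    def temporal_laplacian_eigenmap(X, n_components=3, k=8, temporal_weight=1.0):
        # X: (n_frames, n_features) descriptors of one action seen from one view.
        # Simplified stand-in for Temporal Laplacian Eigenmaps: a k-NN affinity
        # graph augmented with links between temporally adjacent frames.
        n = X.shape[0]
        W = kneighbors_graph(X, k, mode='connectivity', include_self=False)
        W = 0.5 * (W + W.T).toarray()          # symmetrise spatial affinities
        i = np.arange(n - 1)
        W[i, i + 1] = np.maximum(W[i, i + 1], temporal_weight)
        W[i + 1, i] = W[i, i + 1]              # link consecutive frames
        # Solve the Laplacian eigenmap generalised eigenproblem L v = lambda D v,
        # discarding the trivial constant eigenvector (eigenvalue ~ 0).
        L, d = csgraph.laplacian(W, normed=False, return_diag=True)
        _, vecs = eigh(L, np.diag(d))
        return vecs[:, 1:n_components + 1]

    def unify_views(embeddings):
        # Align each view's 3-D manifold to the first via Procrustes analysis
        # and average them -- a hypothetical stand-in for the paper's automatic
        # combination step; assumes the views are frame-synchronised.
        ref = embeddings[0]
        aligned = [procrustes(ref, e)[1] for e in embeddings]
        return np.mean(aligned, axis=0)

    def bidirectional_mapping(X, Y):
        # Exact RBF interpolants give forward (X -> Y) and inverse (Y -> X)
        # maps between the original and embedded spaces.
        fwd = RBFInterpolator(X, Y, kernel='thin_plate_spline')
        inv = RBFInterpolator(Y, X, kernel='thin_plate_spline')
        return fwd, inv

    # Toy demo: one synthetic action observed from two "views".
    rng = np.random.default_rng(0)
    t = np.linspace(0, 4 * np.pi, 120)
    base = np.stack([np.sin(t), np.cos(t), np.sin(2 * t), t / t.max()], axis=1)
    views = [base + rng.normal(0, 0.01, base.shape) for _ in range(2)]
    per_view = [temporal_laplacian_eigenmap(v) for v in views]
    unified = unify_views(per_view)            # (120, 3) shared manifold
    fwd, inv = bidirectional_mapping(views[0], unified)
    print(unified.shape, float(np.abs(inv(unified) - views[0]).max()))

The temporal edges keep frames that are consecutive in time close in the embedding even when their appearance resembles frames from other phases of the action, which is the intuition behind the temporal variant; the actual Temporal Laplacian Eigenmaps construction in the paper differs in its details.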


Citation (APA)

Lewandowski, M., Makris, D., & Nebel, J. C. (2010). View and style-independent action manifolds for human activity recognition. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6316 LNCS, pp. 547–560). Springer Verlag. https://doi.org/10.1007/978-3-642-15567-3_40
