View invariant activity recognition with manifold learning


Abstract

Activity recognition in complex scenes can be very challenging because human actions are unconstrained and may be observed from multiple views. While progress has been made in recognizing activities from fixed views, more research is needed on view invariant recognition methods. Furthermore, recognizing and classifying activities requires processing data in both the space and time domains, which produces large amounts of data and can be computationally expensive. To address view invariance and high dimensionality, we propose manifold learning using Locality Preserving Projections (LPP). We develop an efficient set of features based on radial distance and present a manifold learning framework for learning low dimensional representations of action primitives that can be used to recognize activities from multiple views. Using this approach, we achieve high recognition rates on the INRIA IXMAS dataset. © 2010 Springer-Verlag.
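The dimensionality-reduction step named in the abstract, Locality Preserving Projections, can be illustrated with a short NumPy/SciPy sketch. This is not the authors' implementation; the function name, neighborhood size, heat-kernel parameter, and regularization term below are assumptions chosen for the example.

```python
# Minimal sketch of Locality Preserving Projections (LPP).
# Assumes X is an (n_samples x n_features) matrix of action descriptors.
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def lpp(X, n_components=2, n_neighbors=5, t=1.0, reg=1e-4):
    """Learn a linear projection that preserves local neighborhood structure."""
    n = X.shape[0]
    dist = cdist(X, X, metric="sqeuclidean")

    # Adjacency graph: connect each point to its k nearest neighbors,
    # weighted by a heat kernel, then symmetrize.
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(dist[i])[1:n_neighbors + 1]
        W[i, idx] = np.exp(-dist[i, idx] / t)
    W = np.maximum(W, W.T)

    D = np.diag(W.sum(axis=1))   # degree matrix
    L = D - W                    # graph Laplacian

    # Generalized eigenproblem  X^T L X a = lambda X^T D X a;
    # small regularization keeps the right-hand matrix positive definite.
    A = X.T @ L @ X
    B = X.T @ D @ X + reg * np.eye(X.shape[1])
    eigvals, eigvecs = eigh(A, B)

    # Eigenvectors for the smallest eigenvalues give the
    # locality-preserving projection directions.
    projection = eigvecs[:, :n_components]
    return X @ projection, projection
```

In the setting described above, each row of X would correspond to a radial-distance feature vector extracted from a frame; the learned projection maps new frames into the low dimensional space, where action primitives can then be matched, for example by a nearest-neighbor classifier. The neighborhood size and kernel width would need tuning for the actual data.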

Citation (APA)

Azary, S., & Savakis, A. (2010). View invariant activity recognition with manifold learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6454 LNCS, pp. 606–615). https://doi.org/10.1007/978-3-642-17274-8_59
