Unsupervised discovery, modeling, and analysis of long term activities

Abstract

This work proposes a complete framework for discovering, modeling, and recognizing human activities from video. The framework takes trajectory information as input and proceeds all the way to video interpretation. It narrows the gap between low-level vision information and semantic interpretation by building an intermediate layer composed of Primitive Events. The proposed primitive-event representation captures meaningful motions (actions) over the scene and has the advantage of being learned in an unsupervised manner. We propose using Primitive Events as descriptors to discover, model, and recognize activities automatically. Activity discovery is performed using only real tracking data, semantics are attached to the discovered activities (e.g., "Preparing Meal", "Eating"), and the recognition of activities is performed on new datasets.
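To make the pipeline the abstract describes more concrete (trajectories → unsupervised Primitive Events → activity descriptors), the sketch below clusters simple per-frame motion features into event labels and treats a video as the resulting label sequence. This is only an illustration under stated assumptions: the use of k-means, the (x, y, dx, dy) features, and the cluster count are choices made here, not the authors' actual method.

```python
# Minimal sketch (not the paper's implementation): turn raw trajectory data into
# "primitive event" labels via unsupervised clustering, then describe a video as
# the sequence of those labels. K-means, the feature layout, and the number of
# clusters are assumptions for illustration only.
import numpy as np
from sklearn.cluster import KMeans

def primitive_events(trajectory, n_events=8):
    """Cluster per-frame motion features into primitive-event labels.

    trajectory: (T, 2) array of (x, y) positions over time.
    Returns an array of T-1 integer labels, one per frame transition.
    """
    positions = trajectory[:-1]                        # where the target is
    displacements = np.diff(trajectory, axis=0)        # local motion (dx, dy)
    features = np.hstack([positions, displacements])   # position + motion descriptor
    labels = KMeans(n_clusters=n_events, n_init=10).fit_predict(features)
    return labels

# Usage on synthetic data: a target moving right, then up.
traj = np.vstack([
    np.column_stack([np.arange(50), np.zeros(50)]),
    np.column_stack([np.full(50, 49), np.arange(50)]),
]).astype(float)
events = primitive_events(traj, n_events=4)
print(events)  # a label sequence describing the activity as primitive events
```

The label sequence produced this way could then be fed to a second unsupervised step that groups recurring sub-sequences into candidate activities, which is the role the Primitive Event layer plays between low-level tracking and semantic interpretation in the abstract.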

Citation (APA)

Pusiol, G., Bremond, F., & Thonnat, M. (2011). Unsupervised discovery, modeling, and analysis of long term activities. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6962 LNCS, pp. 101–111). https://doi.org/10.1007/978-3-642-23968-7_11
