Aggregating low-level features for human action recognition

Abstract

Recent methods for human action recognition have achieved strong results using increasingly complex, computationally intensive models and algorithms. At the same time, there has been growing interest in automated video analysis techniques that can be deployed on resource-constrained distributed smart camera networks. In this paper, we introduce a multi-stage method for recognizing human actions (e.g., kicking, sitting, waving) that uses the motion patterns of easy-to-compute, low-level image features. Our method is designed for use on resource-constrained devices and can be optimized for real-time performance. In single-view and multi-view experiments, our method achieves 78% and 84% accuracy, respectively, on a publicly available data set. © 2010 Springer-Verlag.
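
To illustrate the general idea of aggregating easy-to-compute, low-level motion features into a fixed-length descriptor, the Python sketch below is an illustrative assumption, not the paper's method: the function name, the use of frame differencing as the low-level feature, and the grid-pooling scheme are all our own stand-ins.

import numpy as np


def motion_features(frames, grid=(4, 4)):
    # Pool frame-difference "motion energy" over a coarse spatial grid so that
    # a variable-length clip maps to one fixed-length descriptor.
    h, w = frames[0].shape
    gh, gw = h // grid[0], w // grid[1]
    descriptor = np.zeros(grid[0] * grid[1], dtype=np.float64)
    for prev, curr in zip(frames[:-1], frames[1:]):
        diff = np.abs(curr.astype(np.float64) - prev.astype(np.float64))
        for i in range(grid[0]):
            for j in range(grid[1]):
                cell = diff[i * gh:(i + 1) * gh, j * gw:(j + 1) * gw]
                descriptor[i * grid[1] + j] += cell.mean()
    norm = np.linalg.norm(descriptor)
    return descriptor / norm if norm > 0 else descriptor


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic 20-frame, 64x64 grayscale clip standing in for a real video.
    clip = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(20)]
    print(motion_features(clip))  # 16-dimensional descriptor, one value per grid cell

In a full pipeline, such descriptors would feed a standard classifier (e.g., nearest-neighbor or an SVM); the paper's actual multi-stage aggregation and multi-view fusion are not reproduced here.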

Citation (APA)

Parrigan, K., & Souvenir, R. (2010). Aggregating low-level features for human action recognition. In Lecture Notes in Computer Science (Vol. 6453, pp. 143–152). Springer. https://doi.org/10.1007/978-3-642-17289-2_14
