Time invariant gesture recognition by modelling body posture space


Abstract

We propose a framework for recognizing actions or gestures by modelling the variation of body postures within each action class, thereby removing the need to normalize for the speed of motion. The framework has three main components: a shape descriptor suitable for describing posture, the formation of a suitable posture space, and a regression mechanism that models the posture variations of each action class. The Histogram of Oriented Gradients (HOG) is used as the shape descriptor, and its variations are mapped to a reduced Eigenspace by PCA. The mapping of each action class from the HOG space to the reduced Eigenspace is learned using a General Regression Neural Network (GRNN). Classification is performed by comparing points in the Eigenspace against those predicted by each action model, using the Mahalanobis distance. The framework is evaluated on the Weizmann action dataset and the Cambridge Hand Gesture dataset, yielding significant and positive results. © 2012 Springer-Verlag.
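The pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: HOG descriptors are stood in for by synthetic vectors, the GRNN is implemented in its standard form as Gaussian kernel (Nadaraya-Watson) regression, PCA is computed via SVD, and the class names, dimensions, and bandwidth are illustrative assumptions.

```python
import numpy as np

def pca_fit(X, k):
    """Fit PCA: return the mean and the top-k principal axes of X."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def pca_project(X, mean, axes):
    """Project HOG descriptors into the reduced Eigenspace."""
    return (X - mean) @ axes.T

class GRNN:
    """General Regression Neural Network: Gaussian-kernel regression
    from HOG space to Eigenspace, one model per action class."""
    def __init__(self, sigma=1.0):
        self.sigma = sigma

    def fit(self, X, Y):
        self.X, self.Y = X, Y   # training inputs and Eigenspace targets
        return self

    def predict(self, Q):
        # Squared distances between every query and every training sample.
        d2 = ((Q[:, None, :] - self.X[None, :, :]) ** 2).sum(axis=-1)
        W = np.exp(-d2 / (2.0 * self.sigma ** 2))
        return (W @ self.Y) / W.sum(axis=1, keepdims=True)

def mahalanobis(points, model_points):
    """Mean Mahalanobis distance of `points` to the cloud `model_points`
    (covariance regularized to stay invertible)."""
    mu = model_points.mean(axis=0)
    cov = np.cov(model_points, rowvar=False)
    inv = np.linalg.inv(cov + 1e-6 * np.eye(model_points.shape[1]))
    diff = points - mu
    return float(np.sqrt(np.einsum('ij,jk,ik->i', diff, inv, diff)).mean())

def classify(hog_seq, models, mean, axes):
    """Assign a sequence to the class whose GRNN-predicted Eigen points
    lie closest to the sequence's own projected points."""
    target = pca_project(hog_seq, mean, axes)
    scores = {name: mahalanobis(target, m.predict(hog_seq))
              for name, m in models.items()}
    return min(scores, key=scores.get)

# Demo with two synthetic "action classes" of HOG-like descriptors.
rng = np.random.default_rng(0)
hog_dim, eig_dim, frames = 32, 3, 30
seq_a = rng.normal(0.0, 1.0, (frames, hog_dim))
seq_b = rng.normal(3.0, 1.0, (frames, hog_dim))
mean, axes = pca_fit(np.vstack([seq_a, seq_b]), eig_dim)
models = {
    'wave': GRNN(sigma=2.0).fit(seq_a, pca_project(seq_a, mean, axes)),
    'clap': GRNN(sigma=2.0).fit(seq_b, pca_project(seq_b, mean, axes)),
}
print(classify(seq_a, models, mean, axes))
```

Because the regression is learned per class over the whole posture trajectory rather than frame-by-frame timing, a slower or faster execution of the same gesture traces the same path in the Eigenspace, which is what makes the scheme time-invariant.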

APA

Nair, B. M., & Asari, V. K. (2012). Time invariant gesture recognition by modelling body posture space. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7345 LNAI, pp. 124–133). https://doi.org/10.1007/978-3-642-31087-4_14
