3D-Posture Recognition Using Joint Angle Representation


Abstract

This paper presents an approach for recognizing actions performed by humans using joint angles derived from skeleton information. Unlike classical approaches that focus on the body silhouette, our approach uses body joint angles estimated directly from time-series skeleton sequences captured by a depth sensor. The 3D joint locations of the skeletal data are first processed, and the 3D locations computed from the action sequences are then described as angle features. To generate prototypes of action poses, the joint features are quantized into posture visual words. The temporal transitions of these visual words are encoded as symbols for a Hidden Markov Model (HMM). Each action is trained through an HMM using the visual-word symbols; subsequently, all trained HMMs are used for action recognition. © Springer International Publishing Switzerland 2014.
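The angle-feature step of the pipeline above can be sketched as follows: given three 3D joint positions, the angle at the middle joint is the angle between the two limb vectors meeting there. This is a minimal illustration, not the authors' implementation; the function name and the shoulder/elbow/wrist coordinates are assumptions for the example, and the quantization and HMM stages are not shown.

```python
import math

def joint_angle(a, b, c):
    """Angle (degrees) at joint b formed by segments b->a and b->c.

    Each argument is an (x, y, z) tuple of 3D joint coordinates,
    such as those reported by a depth sensor's skeleton tracker.
    """
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    # Clamp to guard against floating-point drift outside [-1, 1].
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_t))

# Example: elbow angle from hypothetical shoulder, elbow, wrist positions.
shoulder, elbow, wrist = (0.0, 0.4, 0.0), (0.0, 0.1, 0.0), (0.25, 0.1, 0.0)
print(joint_angle(shoulder, elbow, wrist))  # 90.0
```

Computed over every frame of a skeleton sequence, such angles form the time-series feature vectors that are then quantized into posture visual words.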


APA

Alwani, A. A., Chahir, Y., Goumidi, D. E., Molina, M., & Jouen, F. (2014). 3D-Posture Recognition Using Joint Angle Representation. In Communications in Computer and Information Science (Vol. 443 CCIS, pp. 106–115). Springer Verlag. https://doi.org/10.1007/978-3-319-08855-6_12
