This paper presents an approach to human action recognition based on joint angles derived from skeleton information. Unlike classical approaches that focus on the body silhouette, our approach uses body joint angles estimated directly from time-series skeleton sequences captured by a depth sensor. The 3D joint locations of the skeletal data are first processed, and the 3D locations computed from the action sequences are described as angle features. To generate prototypes of action poses, the joint features are quantized into posture visual words. The temporal transitions between visual words are then encoded as symbols for a Hidden Markov Model (HMM). Each action class is trained as an HMM over the visual-word symbols, and the set of trained HMMs is used for action recognition. © Springer International Publishing Switzerland 2014.
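The pipeline sketched in the abstract — quantizing joint-angle features into posture visual words, then scoring the resulting symbol sequence under per-action HMMs — can be illustrated as follows. This is a minimal sketch, not the authors' implementation: the abstract does not specify the quantizer, so k-means is assumed here, and all model parameters below are illustrative.

```python
import numpy as np

def kmeans_quantize(features, k, iters=20, seed=0):
    """Quantize joint-angle feature vectors into k posture 'visual words'
    with a tiny k-means (an assumed choice; the paper's quantizer may differ).
    Returns the codebook and the symbol label of each frame."""
    rng = np.random.default_rng(seed)
    codebook = features[rng.choice(len(features), k, replace=False)].astype(float)
    labels = np.zeros(len(features), dtype=int)
    for _ in range(iters):
        # Assign each frame to its nearest codeword.
        dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each codeword to the mean of its assigned frames.
        for j in range(k):
            if np.any(labels == j):
                codebook[j] = features[labels == j].mean(axis=0)
    return codebook, labels

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log-likelihood of a discrete symbol
    sequence under an HMM (pi: initial probs, A: transitions, B: emissions).
    Classification picks the action whose HMM gives the highest score."""
    alpha = pi * B[:, obs[0]]
    s = alpha.sum()
    loglik = np.log(s)
    alpha /= s
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        loglik += np.log(s)
        alpha /= s
    return loglik
```

In use, each training action sequence would be mapped to visual-word symbols via `kmeans_quantize`, an HMM fitted per action class (e.g. by Baum-Welch, not shown), and a test sequence assigned to the class maximizing `forward_loglik`.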
CITATION STYLE
Alwani, A. A., Chahir, Y., Goumidi, D. E., Molina, M., & Jouen, F. (2014). 3D-Posture Recognition Using Joint Angle Representation. In Communications in Computer and Information Science (Vol. 443 CCIS, pp. 106–115). Springer Verlag. https://doi.org/10.1007/978-3-319-08855-6_12