Activity gesture recognition on kinect sensor using convolutional neural networks and FastDTW for the MSRC-12 dataset


Abstract

In this paper, we use data from the Microsoft Kinect sensor, which processes the captured image of a person and reduces the data to a set of joint positions per frame. We then propose building a single image from all the frames extracted from a movement, which facilitates training a convolutional neural network. Finally, we trained a CNN on the MSRC-12 dataset using two training schemes: combined training and individual training. The trained network achieved an accuracy of 86.67% with combined training and 90.78% with individual training, which compares favorably with related work. This demonstrates that convolutional networks can be effective for recognizing human actions from joint data.
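The abstract does not specify the exact encoding used to turn per-frame joint data into an image, so the following is only a minimal sketch of one common approach: stack each frame's flattened joint coordinates as one image row and normalize the result to grayscale pixel values. The function name `joints_to_image` and the array layout (frames × joints × 3D coordinates, as produced by a Kinect v1 skeleton with 20 joints) are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def joints_to_image(frames):
    """Encode a movement sequence as a grayscale image.

    frames: array of shape (num_frames, num_joints, 3) holding the
            (x, y, z) coordinates of each Kinect joint per frame.
    Returns a uint8 image of shape (num_frames, num_joints * 3),
    one row per frame, normalized to the 0-255 pixel range.
    """
    frames = np.asarray(frames, dtype=np.float64)
    num_frames, num_joints, dims = frames.shape
    # Flatten each frame's joints into a single row vector.
    rows = frames.reshape(num_frames, num_joints * dims)
    # Min-max normalize the whole sequence into [0, 255]; the small
    # epsilon guards against a degenerate all-equal sequence.
    lo, hi = rows.min(), rows.max()
    img = (rows - lo) / (hi - lo + 1e-8) * 255.0
    return img.astype(np.uint8)
```

A fixed-size input for the CNN could then be obtained by resampling or padding every sequence to the same number of frames before encoding.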

Citation (APA)

Pfitscher, M., Welfer, D., de Souza Leite Cuadros, M. A., & Gamarra, D. F. T. (2020). Activity gesture recognition on kinect sensor using convolutional neural networks and FastDTW for the MSRC-12 dataset. In Advances in Intelligent Systems and Computing (Vol. 940, pp. 230–239). Springer Verlag. https://doi.org/10.1007/978-3-030-16657-1_21
