Human action recognition of spatiotemporal parameters for skeleton sequences using MTLN feature learning framework


Abstract

Human action recognition (HAR) from skeleton data is considered a promising research direction in computer vision. Three-dimensional HAR with skeleton data has been widely adopted because of its effective and efficient results. Several models have been developed for learning spatiotemporal parameters from skeleton sequences. However, two critical problems remain: (1) previous skeleton sequences were created by connecting different joints in a fixed order, and (2) earlier methods were not able to focus on the most informative joints. Specifically, this study aimed to (1) demonstrate the ability of convolutional neural networks to learn spatiotemporal parameters of skeleton sequences from different frames of human action, and (2) combine the features of all frames produced by different human actions and incorporate the spatial structure information necessary for action recognition, using multi-task learning networks (MTLNs). Executing the proposed model on the NTU RGB+D, SYSU, and SBU Kinect Interaction datasets yielded significant improvements over existing models. We further evaluated our model on noisy estimated poses from subsets of the Kinetics and UCF101 datasets; the experimental results again showed significant improvement.
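The pipeline the abstract describes (per-frame feature extraction from skeleton sequences, followed by aggregation across all frames so spatial structure information feeds one classifier) can be sketched at a high level. The sketch below is illustrative only: it uses a fixed random linear map as a stand-in for the shared CNN, and all names, shapes, and the feature size are assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
JOINTS, DIMS, FEAT = 25, 3, 16  # 25 joints as in NTU RGB+D; FEAT is an assumed size
# Stand-in for a shared per-frame CNN feature extractor (one weight matrix
# reused for every frame, mimicking shared convolutional weights).
W = rng.standard_normal((JOINTS * DIMS, FEAT))

def frame_features(frame):
    """Map one (JOINTS, DIMS) skeleton frame to a FEAT-dimensional vector."""
    return np.tanh(frame.reshape(-1) @ W)

def mtln_aggregate(sequence):
    """MTLN-style aggregation sketch: extract features from every frame with
    the shared extractor, then concatenate them so information from all
    frames is available to a single downstream classifier."""
    feats = np.stack([frame_features(f) for f in sequence])  # (T, FEAT)
    return feats.reshape(-1)                                 # (T * FEAT,)

# A toy 4-frame skeleton sequence
seq = rng.standard_normal((4, JOINTS, DIMS))
vec = mtln_aggregate(seq)
print(vec.shape)  # (64,)
```

In the actual framework the concatenated per-frame features would be consumed by multi-task heads trained jointly; here the concatenation step only illustrates how spatial information from every frame is preserved rather than pooled away.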

Citation (APA)

Mehmood, F., Chen, E., Akbar, M. A., & Alsanad, A. A. (2021). Human action recognition of spatiotemporal parameters for skeleton sequences using MTLN feature learning framework. Electronics (Switzerland), 10(21). https://doi.org/10.3390/electronics10212708
