Joint learning of convolution neural networks for RGB-D-based human action recognition


Abstract

RGB-D-based human action recognition aims to learn distinctive features from different modalities and has shown good progress in practice. However, it is difficult to improve recognition performance by training multiple individual convolutional networks (ConvNets) and fusing their features afterwards, because complementary information between the modalities cannot be learned. To address this issue, this Letter proposes a single two-stream ConvNets framework for multimodality learning that extracts features through RGB and depth streams. Specifically, the authors first represent RGB-D sequences as motion images, which serve as inputs to the proposed ConvNets and capture spatial-temporal information. Then, a feature fusion and joint training strategy is adopted to learn complementary RGB-D features simultaneously. Experimental results on the benchmark NTU RGB+D 120 dataset validate the effectiveness of the proposed framework and demonstrate that the two-stream ConvNets outperform current state-of-the-art approaches.
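The following is a minimal PyTorch sketch of the idea the abstract describes: two ConvNet streams (RGB and depth) over motion images, with feature-level fusion and a single joint loss so both streams are trained together. The backbone layers, feature dimension, and 3-channel motion-image format are illustrative assumptions, not the authors' actual architecture; only the 120-class output matches the NTU RGB+D 120 setting.

```python
import torch
import torch.nn as nn

class TwoStreamConvNet(nn.Module):
    """Two-stream ConvNet with feature fusion and a joint classifier.

    Each stream takes a 'motion image' summarising one modality
    (RGB or depth) of the action sequence; the streams' features are
    concatenated and classified jointly, so complementary RGB-D
    information is learned in one network rather than fused post hoc.
    """
    def __init__(self, num_classes=120, feat_dim=256):
        super().__init__()
        def stream():
            # Toy backbone for illustration; the paper's backbone may differ.
            return nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, feat_dim), nn.ReLU(),
            )
        self.rgb_stream = stream()
        self.depth_stream = stream()
        # Joint classifier over the fused (concatenated) features.
        self.classifier = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, rgb_motion_img, depth_motion_img):
        f_rgb = self.rgb_stream(rgb_motion_img)
        f_depth = self.depth_stream(depth_motion_img)
        fused = torch.cat([f_rgb, f_depth], dim=1)  # feature fusion
        return self.classifier(fused)

# Joint training: one loss back-propagates through both streams,
# so the RGB and depth features are learned simultaneously.
model = TwoStreamConvNet()
rgb = torch.randn(4, 3, 224, 224)
depth = torch.randn(4, 3, 224, 224)  # assumed: depth motion image tiled to 3 channels
loss = nn.CrossEntropyLoss()(model(rgb, depth), torch.randint(0, 120, (4,)))
loss.backward()
```

The key contrast with training two separate ConvNets is the single loss: gradients flow through both streams at once, which is what lets each modality's features adapt to what the other provides.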

Cite (APA)

Ren, Z., Zhang, Q., Qiao, P., Niu, M., Gao, X., & Cheng, J. (2020). Joint learning of convolution neural networks for RGB-D-based human action recognition. Electronics Letters, 56(21), 1112–1115. https://doi.org/10.1049/el.2020.2148
