A hybrid deep learning architecture using 3D CNNs and GRUs for human action recognition

10 citations · 9 Mendeley readers

Abstract

Video content varies along both temporal and spatial dimensions, so recognizing human actions requires modeling changes in both directions. To this end, convolutional neural networks (CNNs), recurrent neural networks (RNNs), and their combinations have been used to capture video dynamics. However, a hybrid architecture usually yields a more complex model and hence a larger number of parameters to optimize. In this study, we propose stacking gated recurrent unit (GRU) layers on top of a two-stream inflated convolutional neural network, where raw frames and optical flow of the video are processed in the first and second streams, respectively. We first segment the video frames so that the video content can be tracked in greater detail, and use 3D CNNs to extract spatio-temporal features, called local features. We then feed the sequence of local features into the GRU network and apply a weighted averaging operator to aggregate the outcomes of the two processing streams, called global features. The evaluations confirm acceptable results on the HMDB51 and UCF101 datasets. The proposed method improved classification accuracy on the challenging HMDB51 dataset by 1.6% over the best previously reported results.
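
To make the pipeline concrete, the following is a minimal PyTorch sketch of the architecture described in the abstract: segment-wise 3D CNN local features, a GRU stack per stream, and weighted averaging of the two streams' outputs. All layer widths, the segment count, the GRU depth, and the fusion weight rgb_weight are assumptions for illustration; the paper's exact inflated-CNN (I3D-style) backbone and training details are not reproduced here.

    import torch
    import torch.nn as nn

    class Stream(nn.Module):
        """One stream: a small 3D CNN backbone followed by stacked GRU layers.
        in_channels is 3 for raw RGB frames and 2 for optical-flow fields."""
        def __init__(self, in_channels, feat_dim=256, hidden_dim=256, num_classes=51):
            super().__init__()
            # Placeholder 3D CNN; the paper uses an inflated (I3D-style) backbone.
            self.cnn3d = nn.Sequential(
                nn.Conv3d(in_channels, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(64, feat_dim, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),  # one "local" feature vector per segment
            )
            # The GRU stack aggregates per-segment local features over time.
            self.gru = nn.GRU(feat_dim, hidden_dim, num_layers=2, batch_first=True)
            self.fc = nn.Linear(hidden_dim, num_classes)

        def forward(self, segments):
            # segments: (batch, num_segments, channels, frames, height, width)
            b, s = segments.shape[:2]
            local = self.cnn3d(segments.flatten(0, 1)).flatten(1).view(b, s, -1)
            _, h = self.gru(local)     # h[-1] is the "global" feature per video
            return self.fc(h[-1])

    class TwoStreamGRU(nn.Module):
        """Two streams fused by a weighted average of their class scores."""
        def __init__(self, num_classes=51, rgb_weight=0.6):  # rgb_weight is assumed
            super().__init__()
            self.rgb = Stream(3, num_classes=num_classes)
            self.flow = Stream(2, num_classes=num_classes)
            self.rgb_weight = rgb_weight

        def forward(self, rgb_segments, flow_segments):
            return (self.rgb_weight * self.rgb(rgb_segments)
                    + (1.0 - self.rgb_weight) * self.flow(flow_segments))

    # Example: 2 videos, each split into 4 segments of 8 frames at 56x56.
    model = TwoStreamGRU(num_classes=51)
    rgb = torch.randn(2, 4, 3, 8, 56, 56)
    flow = torch.randn(2, 4, 2, 8, 56, 56)   # optical flow has 2 channels (dx, dy)
    logits = model(rgb, flow)                # shape: (2, 51) for HMDB51's 51 classes

In this sketch the stream fusion weight is a fixed constant; whether the paper fixes or tunes this weight is not stated in the abstract.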

Cite


Savadi Hosseini, M., & Ghaderi, F. (2020). A hybrid deep learning architecture using 3D CNNs and GRUs for human action recognition. International Journal of Engineering, Transactions B: Applications, 33(5), 959–965. https://doi.org/10.5829/IJE.2020.33.05B.29
