Human Action Recognition in Videos using a Robust CNN LSTM Approach

  • Orozco, C. I.
  • Xamena, E.
  • Buemi, M. E.
  • Berlles, J. J.

Abstract

Action recognition in videos is currently a topic of interest in computer vision, due to potential applications such as multimedia indexing and surveillance in public spaces, among others. In this paper we (1) implement a CNN–LSTM architecture: a pre-trained VGG16 convolutional neural network extracts features from the frames of the input video, and an LSTM then classifies the video into a particular class; (2) study how the number of LSTM units affects the performance of the system; and (3) evaluate the performance of our system using accuracy as the evaluation metric. For the training and test phases we used the KTH, UCF-11 and HMDB-51 datasets, obtaining 93%, 91% and 47% accuracy, respectively.
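The architecture described above feeds per-frame CNN features into an LSTM, and the paper studies how the number of LSTM units affects the system. A minimal sketch of one aspect of that trade-off: the trainable-parameter count of a standard LSTM layer as a function of its unit count. The 4096-dimensional feature size (VGG16's fc-layer output) is an assumption for illustration; the paper does not state which VGG16 layer is used.

```python
def lstm_param_count(units: int, input_dim: int) -> int:
    """Trainable parameters of a standard LSTM layer.

    An LSTM has four gates (input, forget, cell, output), each with an
    input kernel (input_dim x units), a recurrent kernel (units x units),
    and a bias vector (units).
    """
    return 4 * (input_dim * units + units * units + units)

# Assumed per-frame feature size from VGG16's fully connected layers.
FEATURE_DIM = 4096

# Parameter count grows roughly linearly in `units` while input_dim
# dominates, and quadratically once `units` approaches input_dim.
for units in (64, 128, 256, 512):
    print(f"{units:4d} units -> {lstm_param_count(units, FEATURE_DIM):>10,d} parameters")
```

This makes concrete why the unit count matters: with 4096-d inputs, doubling the units roughly doubles the layer's parameters, affecting both capacity and overfitting risk on small datasets such as KTH.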

Cite

Orozco, C. I., Xamena, E., Buemi, M. E., & Berlles, J. J. (2020). Human Action Recognition in Videos using a Robust CNN LSTM Approach. Ciencia y Tecnología, 21–34. https://doi.org/10.18682/cyt.vi0.3288
