A Deep Learning Approach for Human Action Recognition Using Skeletal Information

Abstract

In this paper we present an approach to human action recognition for activities of daily living (ADLs) that uses a convolutional neural network (CNN). The network is trained on discrete Fourier transform (DFT) images derived from raw sensor readings, i.e., each human action is ultimately described by an image. More specifically, we work with the 3D skeletal positions of human joints, which result from processing raw RGB sequences enhanced with depth information. The motion of each joint may be described by a combination of three 1D signals, representing its coordinates in 3D Euclidean space. All such signals from a set of human joints are concatenated to form an image, which is then transformed by the DFT and used for training and evaluation of a CNN. We evaluate our approach on a publicly available, challenging dataset of human actions that may involve one or more body parts simultaneously, using two sets of actions that resemble common ADLs.
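The transformation the abstract describes can be illustrated with a minimal sketch. The array shapes, joint count, and clip length below are illustrative assumptions, not values taken from the paper; the idea is simply that each joint contributes three 1D coordinate signals, the signals are stacked row-wise into a 2D array, and the 2D DFT magnitude of that array serves as the image fed to the CNN.

```python
import numpy as np

# Hypothetical parameters (not from the paper): a Kinect-style skeleton
# with 20 joints, tracked over a 64-frame action clip.
N_JOINTS = 20
T = 64

# Simulated joint trajectories with shape (frames, joints, xyz).
rng = np.random.default_rng(0)
skeleton = rng.standard_normal((T, N_JOINTS, 3))

# Each joint's motion is three 1D signals: x(t), y(t), z(t).
# Stack all signals row-wise into a 2D array ("image"): one row per
# (joint, coordinate) pair, one column per frame.
signals = skeleton.transpose(1, 2, 0).reshape(N_JOINTS * 3, T)

# Apply the 2D discrete Fourier transform and keep the magnitude,
# yielding the DFT image used as CNN input.
dft_image = np.abs(np.fft.fft2(signals))

print(signals.shape)    # (60, 64)
print(dft_image.shape)  # (60, 64)
```

In practice the magnitude spectrum would typically be log-scaled or normalized before training, but those preprocessing choices are not specified in the abstract.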

Citation (APA)

Mathe, E., Maniatis, A., Spyrou, E., & Mylonas, P. (2020). A Deep Learning Approach for Human Action Recognition Using Skeletal Information. In Advances in Experimental Medicine and Biology (Vol. 1194, pp. 105–114). Springer. https://doi.org/10.1007/978-3-030-32622-7_9
