Key frame extraction and classification of human activities using motion energy

2 Citations · 7 Readers (Mendeley)

Abstract

One of the imminent challenges for assistive robots learning human activities by observing a human perform a task is how to define movement representations (states); this question has recently attracted renewed attention. This paper proposes a method for extracting key frames (or poses) of human activities from skeleton joint coordinates obtained with an RGB-D camera (depth sensor). The motion energy (kinetic energy) of each pose in an activity sequence is computed, and a novel approach is proposed for locating the key poses that define an activity using moving-average crossovers of the computed pose kinetic energy. This matters because not all frames of an activity sequence are key to defining the activity. To evaluate the reliability of the extracted key poses, a Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN), which is capable of learning the sequence of state transitions in an activity, is applied to classify activities from the identified key poses. This is important for assistive robots, which must identify key human poses and state transitions in order to correctly carry out human activities. Preliminary experimental results are presented to illustrate the proposed methodology.
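The pipeline described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the frame rate, unit joint masses, and the moving-average window sizes (`short`, `long`) are all assumptions chosen for the example, and the paper's exact crossover rule may differ.

```python
import numpy as np

def kinetic_energy(joints, dt=1.0 / 30.0):
    """Per-frame kinetic energy from skeleton joints.

    joints: array of shape (T, J, 3) -- T frames, J joints, 3-D coordinates.
    Assumes unit mass per joint; velocities come from finite differences.
    Returns an array of length T - 1.
    """
    vel = np.diff(joints, axis=0) / dt            # (T-1, J, 3) joint velocities
    return 0.5 * (vel ** 2).sum(axis=(1, 2))      # sum 0.5 * m * v^2 over joints

def key_frames(energy, short=3, long=9):
    """Key-pose locations as crossovers of short/long moving averages.

    Window sizes are illustrative, not taken from the paper.
    """
    def moving_avg(x, w):
        return np.convolve(x, np.ones(w) / w, mode="same")
    diff = moving_avg(energy, short) - moving_avg(energy, long)
    # A crossover is a sign change of (short MA - long MA).
    return np.where(np.diff(np.sign(diff)) != 0)[0] + 1
```

The returned frame indices would then be used to select the key poses fed to the LSTM classifier; in this sketch the crossover test simply looks for a sign change between consecutive samples of the moving-average difference.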

APA

Adama, D. A., Lotfi, A., & Langensiepen, C. (2019). Key frame extraction and classification of human activities using motion energy. In Advances in Intelligent Systems and Computing (Vol. 840, pp. 303–311). Springer Verlag. https://doi.org/10.1007/978-3-319-97982-3_25
