Child action recognition in RGB and RGB-D data


Abstract

The paper presents ongoing work aimed at real-time action recognition tailored specifically to child-centered research. To this end, we collected and annotated a dataset of 200 primary school children aged 6 to 11. Each child was asked to perform seven actions: boxing, waving, clapping, running, jogging, walking towards the camera, and walking from side to side. Two camera perspectives are provided: a top view in RGB format and a frontal view in both RGB and RGB-D formats. Body keypoints (skeleton data) are extracted using the OpenPose and OpenNI tools. The results of this work are expected to help bridge the performance gap between activity recognition systems for adults and those for children.
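As a minimal sketch of the keypoint-extraction step mentioned above: OpenPose writes one JSON file per frame, with each detected person's 2D skeleton stored as a flat `[x1, y1, c1, x2, y2, c2, ...]` list under `pose_keypoints_2d`. The sample values below are illustrative only, not from the dataset.

```python
import json

# Illustrative frame in OpenPose's per-frame JSON output format:
# "people" -> "pose_keypoints_2d" is a flat [x, y, confidence, ...] list.
# The coordinate values here are made up for demonstration.
sample_frame = json.dumps({
    "people": [
        {"pose_keypoints_2d": [120.5, 80.2, 0.91,
                               118.0, 95.7, 0.88,
                               100.3, 96.1, 0.85]}
    ]
})

def extract_skeletons(frame_json):
    """Return one skeleton per detected person, each a list of
    (x, y, confidence) keypoint triples."""
    frame = json.loads(frame_json)
    skeletons = []
    for person in frame.get("people", []):
        flat = person["pose_keypoints_2d"]
        # Regroup the flat list into (x, y, confidence) triples
        skeletons.append([tuple(flat[i:i + 3])
                          for i in range(0, len(flat), 3)])
    return skeletons

skeletons = extract_skeletons(sample_frame)
```

Per-frame skeletons in this form can then be stacked over time into the sequences a recognition model consumes.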

Citation (APA)

Turarova, A., Zhanatkyzy, A., Telisheva, Z., Sabyrov, A., & Sandygulova, A. (2020). Child action recognition in RGB and RGB-D data. In ACM/IEEE International Conference on Human-Robot Interaction (pp. 491–492). IEEE Computer Society. https://doi.org/10.1145/3371382.3378391
