Synchronized video and motion capture dataset and quantitative evaluation of vision based skeleton tracking methods for robotic action imitation

Abstract

Marker-less skeleton tracking methods are widely used in applications such as computer animation, human action recognition, human-robot collaboration, and humanoid robot motion control. For robot motion control in particular, vision-based tracking using the humanoid's own 3D camera together with a robust and accurate tracking algorithm can be a practical solution. In this paper we quantitatively evaluate two vision-based marker-less skeleton tracking algorithms (the first, Igalia's Skeltrack skeleton tracking; the second, an adaptable and customizable method that combines color and depth information from the Kinect) and perform a comparative analysis of their upper-body tracking results. We have generated a common dataset of human motions by synchronizing an XSENS 3D motion capture system, which serves as ground truth, with video recordings from a 3D sensor device. The dataset could also be used to evaluate other full-body skeleton tracking algorithms. In addition, a set of evaluation metrics is presented.
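The abstract does not specify which evaluation metrics are used, but a common choice for comparing a tracked skeleton against motion-capture ground truth is the mean per-joint position error. The sketch below illustrates that idea; the joint names and coordinates are purely hypothetical and are not taken from the paper's dataset.

```python
import math

def joint_error(tracked, truth):
    """Euclidean distance between two 3D joint positions (metres)."""
    return math.dist(tracked, truth)

def mean_per_joint_error(tracked_frame, truth_frame):
    """Average position error over the joints present in both skeletons."""
    joints = tracked_frame.keys() & truth_frame.keys()
    return sum(joint_error(tracked_frame[j], truth_frame[j])
               for j in joints) / len(joints)

# Illustrative single-frame upper-body skeletons (hypothetical values).
tracked = {"head": (0.0, 1.70, 0.1), "l_hand": (-0.50, 1.0, 0.3)}
truth   = {"head": (0.0, 1.72, 0.1), "l_hand": (-0.48, 1.0, 0.3)}

print(round(mean_per_joint_error(tracked, truth), 3))  # → 0.02
```

In practice such a metric would be computed per frame over the synchronized sequences, after aligning the MoCap and video timestamps.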

Citation (APA)

Atnafu, S., & Nicola, C. (2018). Synchronized video and motion capture dataset and quantitative evaluation of vision based skeleton tracking methods for robotic action imitation. In Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST (Vol. 244, pp. 150–158). Springer Verlag. https://doi.org/10.1007/978-3-319-95153-9_14
