The perception of persons is an important capability for today's robots that work closely with humans. An operator may, for example, use gestures to refer to an object in the environment. In order to perceive such gestures, the robot has to estimate the body pose of the operator. We focus on marker-less motion capture of a human body by means of an Iterative Closest Point (ICP) algorithm for articulated structures. An articulated upper body model is aligned to the depth measurements of an RGB-D camera. Due to the variability of the human body, we propose an adaptive body model that is aligned to the sensor data and iteratively adjusted to the person's body dimensions. Additionally, we preserve model consistency by checking for self-collisions. Besides that, we use an inverse data assignment that is particularly useful for articulated models. Experiments with measurements from a Microsoft Kinect camera show the advantage of the approach compared to the standard articulated ICP algorithm in terms of the root mean squared (RMS) error and the number of iterations the algorithm needs to converge. In addition, we show that our consistency checks enable recovery from situations where the standard algorithm fails. © 2011 Springer-Verlag.
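To make the alternation of data assignment and alignment concrete, here is a minimal sketch of the standard rigid ICP loop (not the articulated, adaptive variant the paper proposes). All function names are illustrative, and the nearest-neighbour assignment shown is the conventional forward assignment, not the paper's inverse variant:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(src, dst, iters=20):
    """Standard rigid ICP: alternate nearest-neighbour assignment and rigid alignment."""
    cur = src.copy()
    for _ in range(iters):
        # Data assignment: each model point is paired with its closest measurement.
        dists = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[dists.argmin(axis=1)]
        # Alignment: solve for the rigid transform and apply it.
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    residuals = np.linalg.norm(cur - matched, axis=1)
    rms = np.sqrt(np.mean(residuals ** 2))  # RMS error, the metric used in the paper's evaluation
    return cur, rms
```

In the articulated setting, the single rigid transform is replaced by per-segment transforms constrained by the joints of the body model, and the paper additionally adapts the segment dimensions across iterations.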
CITATION STYLE
Droeschel, D., & Behnke, S. (2011). 3D body pose estimation using an adaptive person model for articulated ICP. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7102 LNAI, pp. 157–167). https://doi.org/10.1007/978-3-642-25489-5_16