Head pose is an important human cue for social navigation planning, which robots need in order to coexist with humans. Inferring it for distant targets from a mobile platform is a challenging task. This paper addresses the problem by proposing a method for detecting and tracking head pose under these constraints using an RGB-D camera (Microsoft Kinect). First, candidate human regions are segmented and then validated using depth and Hu-moment features. Next, plausible head regions within the segmented areas are estimated with Haar-like features and an AdaBoost classifier. Finally, the detected head regions are post-validated by their dimensions and their probability of containing skin, before the pose estimate is refined and tracked by a boosting-based particle filter. Experimental results demonstrate the feasibility of the proposed approach for detecting and tracking the head pose of far-range targets under both spotlight and natural illumination conditions. © 2011 Springer-Verlag.
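The final tracking stage described above can be illustrated with a generic particle filter over a 2-D head position. This is only a minimal sketch, not the authors' boosting-based filter: the `measurement` input stands in for the boosted detector's head estimate, and the random-walk motion model and Gaussian likelihood are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, measurement,
                         motion_std=2.0, meas_std=5.0):
    """One predict-update-resample cycle for a 2-D head-position tracker.

    `measurement` is a hypothetical (x, y) head estimate from a detector;
    the paper's actual filter fuses boosted-detector responses instead.
    """
    # Predict: propagate particles with a random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: weight each particle by a Gaussian likelihood of the measurement.
    d2 = np.sum((particles - measurement) ** 2, axis=1)
    weights = np.exp(-d2 / (2.0 * meas_std ** 2))
    weights /= weights.sum()
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx]
    weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

# Usage: track a head drifting rightward across the image.
particles = rng.uniform(0.0, 100.0, size=(500, 2))
weights = np.full(500, 1.0 / 500)
for t in range(20):
    measurement = np.array([50.0 + t, 40.0])
    particles, weights = particle_filter_step(particles, weights, measurement)
estimate = particles.mean(axis=0)  # converges near the last measurement
```

The weighted mean of the resampled particles serves as the current head-position estimate; in the paper this loop would additionally carry head-pose (orientation) state.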
CITATION STYLE
Tomari, R., Kobayashi, Y., & Kuno, Y. (2011). Multi-view head detection and tracking with long range capability for social navigation planning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6939 LNCS, pp. 418–427). https://doi.org/10.1007/978-3-642-24031-7_42