Accurately predicting 3D body joint positions in real time from a depth image is the cornerstone of many safety, biomedical, and entertainment applications. Despite the high quality of depth images, the accuracy of existing methods for human pose estimation from single depth images remains insufficient for some applications. To enhance the accuracy, we propose leveraging a rough orientation estimate to dynamically select a 3D joint position prediction model specialized for that orientation. The orientation estimate can be obtained in real time either from the image itself or from any other cue, such as tracking. We demonstrate the merits of this general principle on a pose estimation method similar to the one used with Kinect cameras. Our results show that accuracy is improved by up to 45.1% with respect to a method that uses the same model for all orientations.
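The general principle can be sketched as follows: bin a rough orientation estimate into discrete sectors and dispatch each depth image to the joint-prediction model specialized for that sector. This is a minimal illustrative sketch only; the number of bins, the class and function names, and the prediction interface are assumptions, not the authors' implementation.

```python
import math

# Hypothetical choice: one specialized model per 45-degree sector.
N_BINS = 8

def orientation_bin(theta_rad, n_bins=N_BINS):
    """Map a rough orientation angle (radians) to a discrete bin index."""
    theta = theta_rad % (2.0 * math.pi)  # normalize to [0, 2*pi)
    return int(theta / (2.0 * math.pi / n_bins)) % n_bins

class OrientationAwarePoseEstimator:
    """Selects a per-orientation joint-position model at prediction time.

    Illustrative sketch: each model is any callable mapping a depth image
    to predicted 3D joint positions.
    """

    def __init__(self, models):
        assert len(models) == N_BINS, "one specialized model per bin"
        self.models = models

    def predict(self, depth_image, theta_rad):
        # theta_rad may come from the image itself or from tracking.
        model = self.models[orientation_bin(theta_rad)]
        return model(depth_image)
```

The dispatch step is O(1), so the orientation-aware selection adds no meaningful overhead to a real-time pipeline.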
Citation:
Azrour, S., Piérard, S., & Van Droogenbroeck, M. (2016). Leveraging orientation knowledge to enhance human pose estimation methods. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9756, pp. 81–87). Springer Verlag. https://doi.org/10.1007/978-3-319-41778-3_8