First-person palm pose tracking and gesture recognition in augmented reality

Abstract

We present an Augmented Reality solution that allows users to manipulate and inspect 3D virtual objects freely with their bare hands on wearable devices. To this end, we use a head-mounted depth camera to capture RGB-D hand images from an egocentric view, and propose a unified framework to jointly recover the 6D palm pose and recognize the hand gesture from the depth images. A random forest regresses the palm pose and classifies the hand gesture simultaneously via a spatial-voting framework. Trained on a real-world annotated dataset, the proposed method predicts the palm pose and gesture accurately. The output of the forest is used to render the 3D virtual objects, which are overlaid onto the hand region of the input RGB images using the camera calibration parameters, providing seamless synthesis of the virtual and real scenes.
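To make the spatial-voting idea concrete, the following is a minimal Python sketch, not the authors' implementation: each sampled depth pixel casts a vote for the 6D palm pose and a gesture label, and the per-pixel votes are fused into one frame-level prediction. The depth-difference features, the (tx, ty, tz, rx, ry, rz) pose parameterization, and the use of two separate scikit-learn forests in place of the paper's single unified forest are all illustrative assumptions.

```python
# A minimal sketch (assumed, not the paper's implementation) of per-pixel
# spatial voting for palm pose and gesture from depth features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical training data: one row per sampled depth pixel.
# X: depth-difference features around each pixel (assumed representation).
# y_pose: per-pixel vote for the 6D palm pose (tx, ty, tz, rx, ry, rz).
# y_gesture: gesture label of the frame the pixel came from.
n_pixels, n_features = 5000, 32
X = rng.standard_normal((n_pixels, n_features))
y_pose = rng.standard_normal((n_pixels, 6))
y_gesture = rng.integers(0, 5, size=n_pixels)

# The paper trains one unified forest; two forests are used here because
# scikit-learn separates regression from classification.
pose_forest = RandomForestRegressor(n_estimators=50).fit(X, y_pose)
gesture_forest = RandomForestClassifier(n_estimators=50).fit(X, y_gesture)

def predict_frame(pixel_features: np.ndarray):
    """Fuse per-pixel votes into one frame-level prediction."""
    pose_votes = pose_forest.predict(pixel_features)        # (n, 6) votes
    palm_pose = pose_votes.mean(axis=0)                     # fuse by mean
    gesture_votes = gesture_forest.predict(pixel_features)  # (n,) labels
    gesture = np.bincount(gesture_votes).argmax()           # majority vote
    return palm_pose, gesture

# Example: predict from 200 sampled pixels of a (synthetic) test frame.
pose, gesture = predict_frame(rng.standard_normal((200, n_features)))
print(pose, gesture)
```

Mean fusion and majority voting are the simplest aggregation choices; a mean-shift or clustered vote fusion, as is common in spatial-voting pipelines, would be more robust to outlier pixels.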

Citation (APA)

Thalmann, D., Liang, H., & Yuan, J. (2016). First-person palm pose tracking and gesture recognition in augmented reality. Communications in Computer and Information Science, 598, 3–15. https://doi.org/10.1007/978-3-319-29971-6_1
