In this paper, we present a novel approach for real-time 3D hand tracking from a stream of depth images. In each frame, our approach initializes the hand pose via learning and then jointly optimizes the hand pose and shape. For pose initialization, we propose a gesture classification and root location network (GCRL), which captures the meaningful topological structure of the hand to estimate its gesture and root location. With this per-frame initialization, our approach can rapidly recover from tracking failures. For optimization, unlike most existing methods that rely on a fixed-size hand model or manual calibration, we propose a hand gesture-guided optimization strategy that estimates pose and shape iteratively, making the tracking results more accurate. Experiments on three challenging datasets show that our approach achieves accuracy comparable to state-of-the-art approaches while running with low computational resources (without a GPU).
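The per-frame pipeline described above (learning-based initialization followed by joint pose/shape optimization) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: `gcrl_init`, the 26-dimensional pose vector, the 10-dimensional shape vector, and the damped update rule are all placeholder assumptions standing in for the GCRL network and the gesture-guided energy minimization.

```python
import numpy as np

def gcrl_init(depth_frame):
    """Hypothetical stand-in for the GCRL network: returns a gesture
    class and a root-location guess for the current depth frame."""
    gesture = int(depth_frame.mean() > 0.5)  # placeholder classifier
    root = np.array([depth_frame.shape[0] / 2.0,
                     depth_frame.shape[1] / 2.0,
                     float(depth_frame.mean())])
    return gesture, root

def optimize_pose_shape(pose, shape, depth_frame, n_iters=5):
    """Toy alternating refinement of pose and shape parameters.
    A real tracker would minimize a depth-alignment energy guided by
    the predicted gesture; here both vectors are simply damped toward
    a data-dependent target to show the iterative structure."""
    target = depth_frame.mean()
    for _ in range(n_iters):
        pose = 0.5 * (pose + target)    # pose refinement step (placeholder)
        shape = 0.5 * (shape + target)  # shape refinement step (placeholder)
    return pose, shape

def track(frames):
    """Per-frame loop: learning-based initialization, then joint
    pose/shape optimization. Re-initializing every frame is what
    lets the tracker recover quickly from failures."""
    results = []
    for frame in frames:
        gesture, root = gcrl_init(frame)      # per-frame initialization
        pose = np.full(26, float(gesture))    # assumed 26-DoF pose guess
        shape = np.ones(10)                   # assumed 10 shape parameters
        pose, shape = optimize_pose_shape(pose, shape, frame)
        results.append((gesture, root, pose, shape))
    return results
```

The key design point mirrored here is that initialization is independent per frame, so a bad estimate in one frame cannot propagate into the next.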
Che, Y., & Qi, Y. (2020). Embedding Gesture Prior to Joint Shape Optimization Based Real-Time 3D Hand Tracking. IEEE Access, 8, 34204–34214. https://doi.org/10.1109/ACCESS.2020.2974551