Abstract
Many of the actions that we take with our hands involve self-contact and occlusion: shaking hands, making a fist, or interlacing our fingers while thinking. This use of our hands illustrates the importance of tracking hands through self-contact and occlusion for many applications in computer vision and graphics, yet existing methods for tracking hands and faces are not designed to handle the extreme amounts of self-contact and self-occlusion exhibited by common hand gestures. By extending recent advances in vision-based tracking and physically based animation, we present the first algorithm capable of tracking high-fidelity hand deformations through highly self-contacting and self-occluding hand gestures, for both single hands and two hands. By constraining a vision-based tracking algorithm with a physically based deformable model, we obtain an algorithm that is robust to the ubiquitous self-interactions and massive self-occlusions exhibited by common hand gestures, allowing us to track two-hand interactions and some of the most difficult possible configurations of a human hand.
Smith, B., Wu, C., Wen, H., Peluse, P., Sheikh, Y., Hodgins, J. K., & Shiratori, T. (2020). Constraining dense hand surface tracking with elasticity. ACM Transactions on Graphics, 39(6). https://doi.org/10.1145/3414685.3417768