Capturing Hands in Action Using Discriminative Salient Points and Physics Simulation


Abstract

Hand motion capture is a popular research field that has recently gained more attention due to the ubiquity of RGB-D sensors. However, even the most recent approaches focus on the case of a single isolated hand. In this work, we focus on hands that interact with other hands or with objects, and we present a framework that successfully captures motion in such interaction scenarios for both rigid and articulated objects. Our framework combines a generative model with discriminatively trained salient points to achieve a low tracking error, and with collision detection and physics simulation to obtain physically plausible estimates even in the case of occlusions and missing visual data. Since all components are unified in a single objective function that is almost everywhere differentiable, it can be optimized with standard optimization techniques. Our approach works for monocular RGB-D sequences as well as for setups with multiple synchronized RGB cameras. For a qualitative and quantitative evaluation, we captured 29 sequences with a large variety of interactions and up to 150 degrees of freedom.
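The core idea of a single objective combining a data term with a physical-plausibility penalty can be illustrated with a toy sketch. This is not the authors' implementation: the 1-D parameter, the hinge-style penetration penalty, and the plain gradient-descent loop are all simplified stand-ins for the paper's pose parameters, collision detection, and "standard optimization techniques".

```python
def objective(x, target=2.0, obstacle=1.5, radius=0.3, w_col=10.0):
    """Toy unified objective: a data term pulls the parameter x toward an
    observation, while a squared hinge penalty (differentiable almost
    everywhere, like the paper's collision term) discourages penetrating
    an 'obstacle' of the given radius."""
    data = (x - target) ** 2
    penetration = max(0.0, radius - abs(x - obstacle))
    return data + w_col * penetration ** 2

def minimize(f, x0, lr=0.05, steps=500, eps=1e-6):
    """Plain gradient descent with a central-difference derivative,
    standing in for the standard optimizers the abstract refers to."""
    x = x0
    for _ in range(steps):
        grad = (f(x + eps) - f(x - eps)) / (2 * eps)
        x -= lr * grad
    return x

x_star = minimize(objective, x0=0.0)
```

Because the collision penalty enters the same objective as the data term, the optimizer settles at a compromise just inside the obstacle's boundary rather than driving the estimate all the way to the (physically implausible) observation, which is the qualitative behavior the unified formulation is meant to produce.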

Citation (APA)

Tzionas, D., Ballan, L., Srikantha, A., Aponte, P., Pollefeys, M., & Gall, J. (2016). Capturing Hands in Action Using Discriminative Salient Points and Physics Simulation. International Journal of Computer Vision, 118(2), 172–193. https://doi.org/10.1007/s11263-016-0895-4
