Interaction fusion: Real-time reconstruction of hand poses and deformable objects in hand-object interactions

Abstract

Hand-object interaction is challenging to reconstruct but important for many applications, such as HCI and robotics. Previous works focus on either the hand or the object, whereas we jointly track the hand poses, fuse the 3D object model, reconstruct its rigid and nonrigid motions, and perform all these tasks in real time. To achieve this, we first use a DNN to segment the hand and object in the two input depth streams, and predict the current hand pose from the previous poses with a pre-trained LSTM network. With this information, a unified optimization framework is proposed to jointly track the hand poses and object motions. The optimization integrates the segmented depth maps, the predicted motion, a spatially and temporally varying rigidity regularizer, and a real-time contact constraint. A nonrigid fusion technique is further employed to reconstruct the object model. Experiments demonstrate that our method can resolve the ambiguity caused by heavy occlusions between hand and object, and generates accurate results for various objects and interacting motions.
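
As a rough sketch (not the paper's exact notation), the unified per-frame optimization described above can be read as minimizing a weighted sum of the four stated terms:

E(\theta, M) = E_{\mathrm{depth}}(\theta, M)
             + \lambda_{1}\, E_{\mathrm{motion}}(\theta)
             + \lambda_{2}\, E_{\mathrm{rigidity}}(M)
             + \lambda_{3}\, E_{\mathrm{contact}}(\theta, M)

where \theta denotes the hand pose parameters, M the rigid and nonrigid object motion, E_{\mathrm{depth}} measures agreement with the segmented depth maps, E_{\mathrm{motion}} penalizes deviation from the LSTM-predicted pose, E_{\mathrm{rigidity}} is the spatially and temporally varying rigidity regularizer, and E_{\mathrm{contact}} enforces the contact constraint. The symbols and weights \lambda_i are illustrative assumptions; the paper's exact formulation may differ.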

Citation (APA)

Zhang, H., Bo, Z. H., Yong, J. H., & Xu, F. (2019). Interaction fusion: Real-time reconstruction of hand poses and deformable objects in hand-object interactions. ACM Transactions on Graphics, 38(4). https://doi.org/10.1145/3306346.3322998
