Reconstructing Hand-Held Objects from Monocular Video


Abstract

This paper presents an approach that reconstructs a hand-held object from a monocular video. In contrast to many recent methods that directly predict object geometry with a trained network, the proposed approach does not require any learned prior about the object and recovers more accurate and detailed geometry. The key idea is that the hand motion naturally provides multiple views of the object, and that this motion can be reliably estimated by a hand pose tracker. The object geometry can then be recovered by solving a multi-view reconstruction problem. We devise an implicit neural representation-based method to solve the reconstruction problem and address the issues of imprecise hand pose estimation, relative hand-object motion, and insufficient geometry optimization for small objects. We also provide a newly collected dataset with 3D ground truth to validate the proposed approach. The dataset and code will be released at https://dihuangdh.github.io/hhor.
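The abstract's key idea, that tracked hand motion supplies the camera poses needed for multi-view reconstruction of a rigidly held object, can be illustrated with a minimal sketch. The example below is not the paper's implicit neural method; it is a hypothetical, noise-free toy in which per-frame hand poses (R, t) are treated as camera extrinsics and a 3D point on the object is recovered by standard linear triangulation. The intrinsics and poses are invented for illustration.

```python
import numpy as np

def rot_y(theta):
    """Rotation about the y-axis (stand-in for tracked hand rotation)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

# Hypothetical pinhole intrinsics.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

def projection_matrix(R, t):
    """Build P = K [R | t] from a hand pose used as camera extrinsics."""
    return K @ np.hstack([R, t.reshape(3, 1)])

def project(P, X):
    """Project a 3D point to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def triangulate(P1, x1, P2, x2):
    """Linear (DLT) triangulation from two views."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    Xh = Vt[-1]
    return Xh[:3] / Xh[3]

# Two "hand poses" give two views of a point on the held object.
R1, t1 = rot_y(0.0), np.array([0.0, 0.0, 2.0])
R2, t2 = rot_y(0.4), np.array([0.1, 0.0, 2.0])
X_true = np.array([0.05, -0.02, 0.1])

P1 = projection_matrix(R1, t1)
P2 = projection_matrix(R2, t2)
X_rec = triangulate(P1, project(P1, X_true), P2, project(P2, X_true))
print(np.allclose(X_rec, X_true, atol=1e-6))
```

In the actual paper this two-view geometry is replaced by an implicit neural surface optimized over many frames, which also has to cope with imprecise hand poses and hand-object slip; the sketch only shows why tracked hand motion makes the problem multi-view at all.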

Citation (APA)

Huang, D., Ji, X., He, X., Sun, J., He, T., Shuai, Q., … Zhou, X. (2022). Reconstructing Hand-Held Objects from Monocular Video. In Proceedings - SIGGRAPH Asia 2022 Conference Papers. Association for Computing Machinery, Inc. https://doi.org/10.1145/3550469.3555401
