Domain Transfer for 3D Pose Estimation from Color Images Without Manual Annotations

Abstract

We introduce a novel learning method for 3D pose estimation from color images. Because manually annotating color images is difficult, our approach circumvents this problem by learning a mapping from paired color and depth images captured with an RGB-D camera. We jointly learn the pose from synthetic depth images, which are easy to generate, and learn to align these synthetic depth images with the real depth images. We demonstrate our approach on the tasks of 3D hand pose estimation and 3D object pose estimation, both from color images only. Our method achieves performance comparable to state-of-the-art methods on popular benchmark datasets, without requiring any annotations for the color images.
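
The sketch below is a minimal, illustrative reading of the training setup described in the abstract, not the authors' exact pipeline: the network definitions, loss terms, loss weighting, and the 21-joint output dimension are all assumptions. It combines pose supervision on synthetic depth images, a simple feature-alignment term between synthetic and real depth, and a mapping from paired color images into the real-depth feature space, so that at test time the pose can be predicted from color alone.

```python
import torch
import torch.nn as nn

# Hypothetical feature extractors and pose regressor; the paper's actual
# architectures, losses, and hyperparameters are not reproduced here.
class FeatureNet(nn.Module):
    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 128),
        )

    def forward(self, x):
        return self.net(x)

pose_head = nn.Linear(128, 3 * 21)  # e.g. 21 hand joints in 3D (assumed)
f_synth   = FeatureNet(1)           # features from synthetic depth images
f_real    = FeatureNet(1)           # features from real depth images
f_color   = FeatureNet(3)           # features from real color images

opt = torch.optim.Adam(
    list(pose_head.parameters()) + list(f_synth.parameters())
    + list(f_real.parameters()) + list(f_color.parameters()), lr=1e-4)

def training_step(synth_depth, synth_pose, real_depth, real_color):
    """One joint update: pose supervision on annotated synthetic depth,
    alignment of synthetic and real depth features, and a mapping from
    paired color images onto the real-depth feature space."""
    # 1) Pose loss on synthetic depth images, which come with labels for free.
    feat_s = f_synth(synth_depth)
    loss_pose = nn.functional.mse_loss(pose_head(feat_s), synth_pose)

    # 2) Align synthetic-depth and real-depth features (a simple mean-feature
    #    distance stands in for whatever alignment term the paper uses).
    feat_r = f_real(real_depth)
    loss_align = (feat_s.mean(0) - feat_r.mean(0)).pow(2).mean()

    # 3) Map color features onto the (detached) real-depth features, exploiting
    #    the fact that color and depth frames are captured as pairs.
    feat_c = f_color(real_color)
    loss_map = nn.functional.mse_loss(feat_c, feat_r.detach())

    loss = loss_pose + loss_align + loss_map
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# At test time only the color branch is needed: pose_head(f_color(rgb)).
```

Under these assumptions, inference uses only f_color and pose_head, which reflects the abstract's claim of predicting 3D pose from color images without any color annotations.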

Citation (APA)

Rad, M., Oberweger, M., & Lepetit, V. (2019). Domain Transfer for 3D Pose Estimation from Color Images Without Manual Annotations. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11365 LNCS, pp. 69–84). Springer Verlag. https://doi.org/10.1007/978-3-030-20873-8_5
