Self-Supervised Category-Level 6D Object Pose Estimation with Deep Implicit Shape Representation


Abstract

Category-level 6D pose estimation generalizes better to unseen objects within a category than instance-level 6D pose estimation. However, existing category-level methods usually require supervised training with a large number of 6D pose annotations, which makes them difficult to apply in real scenarios. To address this problem, we propose a self-supervised framework for category-level 6D pose estimation. We leverage DeepSDF as a 3D object representation and design several novel DeepSDF-based loss functions that enable the self-supervised model to predict the poses of unseen objects without any 6D pose labels or explicit 3D models in real scenarios. Experiments demonstrate that our method achieves performance comparable to state-of-the-art fully supervised methods on the category-level NOCS benchmark.
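The abstract only sketches how DeepSDF supplies the self-supervision signal. As a rough, hypothetical illustration (not the paper's actual losses), one plausible term penalizes observed depth points that, once mapped into the canonical object frame by the predicted pose, do not lie on the shape's zero level set. In the PyTorch sketch below, `deepsdf_decoder`, `latent_code`, and the pose parametrization (R, t, s) are assumed placeholders.

```python
import torch

def sdf_consistency_loss(points_cam, R, t, s, deepsdf_decoder, latent_code):
    """Hypothetical DeepSDF-based self-supervision term (illustrative only).

    points_cam:      (N, 3) surface points back-projected from the depth map
    R, t, s:         predicted rotation (3, 3), translation (3,), scale (scalar)
    deepsdf_decoder: pretrained DeepSDF MLP mapping [latent, xyz] -> signed distance
    latent_code:     (D,) shape latent code for the object
    """
    # Map camera-frame points into the unit-scaled canonical frame:
    # p_canon = R^T (p_cam - t) / s, written row-wise as (p - t) @ R / s.
    points_canon = (points_cam - t) @ R / s                        # (N, 3)
    # Query the decoder; observed surface points should have zero signed distance.
    z = latent_code.expand(points_canon.shape[0], -1)              # (N, D)
    sdf = deepsdf_decoder(torch.cat([z, points_canon], dim=-1))    # (N, 1)
    return sdf.abs().mean()
```

In a full pipeline, a term like this would presumably be combined with the paper's other loss functions and with regularization on the latent shape code.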

Citation (APA)

Peng, W., Yan, J., Wen, H., & Sun, Y. (2022). Self-Supervised Category-Level 6D Object Pose Estimation with Deep Implicit Shape Representation. In Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022 (Vol. 36, pp. 2082–2090). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v36i2.20104
