SDFEst: Categorical Pose and Shape Estimation of Objects From RGB-D Using Signed Distance Fields

Abstract

Rich geometric understanding of the world is an important component of many robotic applications such as planning and manipulation. In this paper, we present a modular pipeline for pose and shape estimation of objects from RGB-D images given their category. The core of our method is a generative shape model, which we integrate with a novel initialization network and a differentiable renderer to enable 6D pose and shape estimation from a single view or from multiple views. We investigate the use of discretized signed distance fields as an efficient shape representation for fast analysis-by-synthesis optimization. Our modular framework enables multi-view optimization and extensibility. We demonstrate the benefits of our approach over state-of-the-art methods in several experiments on both synthetic and real data. We open-source our approach at https://github.com/roym899/sdfest.
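The analysis-by-synthesis idea behind the pipeline can be illustrated with a short, self-contained sketch. The snippet below is not the SDFEst implementation (the actual method combines a generative shape model, an initialization network, and a differentiable renderer); it only shows the underlying principle of refining a pose against a discretized signed distance field by gradient descent. All names and values are illustrative assumptions: a toy sphere SDF on a 32^3 grid stands in for the learned shape, trilinear interpolation via PyTorch's grid_sample stands in for the renderer, and only a translation is refined by driving the SDF values at observed surface points toward zero.

# Minimal sketch (not the SDFEst code): refine an object translation so that
# observed surface points lie on the zero level set of a discretized SDF,
# queried by trilinear interpolation. All names and values are illustrative.
import torch
import torch.nn.functional as F

def query_sdf(grid: torch.Tensor, points: torch.Tensor) -> torch.Tensor:
    """Trilinearly interpolate a discretized SDF at 3D points in [-1, 1]^3."""
    g = grid[None, None]              # (1, 1, D, H, W)
    p = points[None, None, None]      # (1, 1, 1, N, 3), ordered (x, y, z)
    return F.grid_sample(g, p, align_corners=True).view(-1)

# Toy SDF of a sphere with radius 0.5, sampled on a 32^3 grid.
lin = torch.linspace(-1.0, 1.0, 32)
zz, yy, xx = torch.meshgrid(lin, lin, lin, indexing="ij")
sdf_grid = torch.sqrt(xx**2 + yy**2 + zz**2) - 0.5

# "Observed" depth points: the same sphere surface shifted by an unknown offset.
true_translation = torch.tensor([0.2, -0.1, 0.05])
directions = F.normalize(torch.randn(500, 3), dim=1)
observed = 0.5 * directions + true_translation

# Analysis-by-synthesis style refinement: back-transform the observed points
# into the SDF frame and drive their signed distances to zero.
translation = torch.zeros(3, requires_grad=True)
optimizer = torch.optim.Adam([translation], lr=1e-2)
for _ in range(300):
    optimizer.zero_grad()
    canonical = observed - translation          # transform into the SDF frame
    loss = query_sdf(sdf_grid, canonical).abs().mean()
    loss.backward()
    optimizer.step()

print("estimated translation:", translation.detach())  # ~ [0.2, -0.1, 0.05]

Querying a discretized SDF this way is cheap and fully differentiable, which is what makes gradient-based analysis-by-synthesis optimization over pose (and, in the full method, also shape) fast; the paper's renderer-based objective replaces the simple point-to-surface residual used above.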

Cite

APA

Bruns, L., & Jensfelt, P. (2022). SDFEst: Categorical Pose and Shape Estimation of Objects From RGB-D Using Signed Distance Fields. IEEE Robotics and Automation Letters, 7(4), 9597–9604. https://doi.org/10.1109/LRA.2022.3189792
