3DPointCaps++: Learning 3D Representations with Capsule Networks


Abstract

We present 3DPointCaps++ for learning robust, flexible, and generalizable 3D object representations without requiring heavy annotation effort or supervision. Unlike conventional 3D generative models, our algorithm aims to build a structured latent space in which certain factors of shape variation, such as object parts, can be disentangled into independent sub-spaces. Our novel decoder then acts on these individual latent sub-spaces (i.e., capsules) using deconvolution operators to reconstruct 3D points in a self-supervised manner. We further introduce a cluster loss ensuring that the points reconstructed by a single capsule remain local and do not spread across the object uncontrollably. These contributions allow our network to tackle the challenging tasks of part segmentation, part interpolation/replacement, and correspondence estimation across rigid/non-rigid shapes, both across and within categories. Our extensive evaluations on ShapeNet objects and human scans demonstrate that our network learns generic representations that are robust and useful in many applications.
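The cluster loss described above could, for instance, penalize the spread of each capsule's reconstructed points around their centroid. The sketch below is a hypothetical illustration of this idea (the paper's exact formulation may differ); the function name and the per-capsule grouping are assumptions for illustration only.

```python
import numpy as np

def cluster_loss(points_per_capsule):
    """Hypothetical sketch of a locality (cluster) loss: penalize the
    spread of the points reconstructed by each capsule around that
    capsule's centroid, so each capsule's points stay local on the object.

    points_per_capsule: list of (N_k, 3) arrays, one per capsule.
    """
    total = 0.0
    for pts in points_per_capsule:
        centroid = pts.mean(axis=0)
        # Mean squared Euclidean distance of the capsule's points
        # to their centroid.
        total += np.mean(np.sum((pts - centroid) ** 2, axis=1))
    return total / len(points_per_capsule)
```

A capsule whose points coincide contributes zero, while a capsule whose points scatter widely dominates the loss, which is the behavior the abstract attributes to the cluster term.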

Citation (APA)

Zhao, Y., Fang, G., Guo, Y., Guibas, L., Tombari, F., & Birdal, T. (2022). 3DPointCaps++: Learning 3D Representations with Capsule Networks. International Journal of Computer Vision, 130(9), 2321–2336. https://doi.org/10.1007/s11263-022-01632-6
