Frankenstein: Generating Semantic-Compositional 3D Scenes in One Tri-Plane

Abstract

We present Frankenstein, a diffusion-based framework that can generate semantic-compositional 3D scenes in a single pass. Unlike existing methods that output a single, unified 3D shape, Frankenstein simultaneously generates multiple separated shapes, each corresponding to a semantically meaningful part. The 3D scene information is encoded in a single tri-plane tensor, from which multiple Signed Distance Function (SDF) fields can be decoded to represent the compositional shapes. During training, an auto-encoder compresses the tri-planes into a latent space, and a denoising diffusion process is then employed to approximate the distribution of the compositional scenes. Frankenstein demonstrates promising results in generating room interiors as well as human avatars with automatically separated parts. The generated scenes facilitate many downstream applications, such as part-wise re-texturing, object rearrangement in the room, or avatar cloth re-targeting.
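The core idea of decoding multiple SDF fields from one shared tri-plane can be sketched as follows. This is a minimal illustration, not the paper's implementation: the plane resolution, the summation-based feature aggregation, and the tiny linear per-part heads (standing in for the paper's decoder networks) are all assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
R, C = 32, 16   # assumed plane resolution and feature channels
K = 3           # number of semantic parts (e.g. walls, floor, furniture)

# One tri-plane: three axis-aligned 2D feature grids (XY, XZ, YZ).
planes = [rng.standard_normal((R, R, C)) for _ in range(3)]

# Hypothetical per-part decoders: one tiny linear head per semantic part
# (a real system would use small MLPs here).
heads = [(rng.standard_normal(C) * 0.1, 0.0) for _ in range(K)]

def sample_plane(plane, u, v):
    """Bilinearly sample a 2D feature grid at continuous coords in [0, 1]."""
    x, y = u * (R - 1), v * (R - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, R - 1), min(y0 + 1, R - 1)
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * plane[x0, y0] + fx * (1 - fy) * plane[x1, y0]
            + (1 - fx) * fy * plane[x0, y1] + fx * fy * plane[x1, y1])

def decode_sdfs(p):
    """Decode K SDF values at a 3D point p in [0, 1]^3 from the shared tri-plane."""
    x, y, z = p
    feat = (sample_plane(planes[0], x, y)      # project onto each plane,
            + sample_plane(planes[1], x, z)    # then aggregate the three
            + sample_plane(planes[2], y, z))   # sampled features by summation
    # One signed distance per semantic part, all from the same feature vector.
    return [float(w @ feat + b) for w, b in heads]

sdfs = decode_sdfs((0.4, 0.5, 0.6))
print(len(sdfs))  # K separate SDF predictions from one tri-plane
```

Because every part's SDF is decoded from the same tensor, the scene stays compact while each part remains individually extractable (e.g. via marching cubes on its own SDF), which is what enables the part-wise editing applications mentioned above.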

Citation (APA)

Yan, H., Li, Y., Wu, Z., Chen, S., Sun, W., Shang, T., … Ji, P. (2024). Frankenstein: Generating Semantic-Compositional 3D Scenes in One Tri-Plane. In Proceedings - SIGGRAPH Asia 2024 Conference Papers, SA 2024. Association for Computing Machinery, Inc. https://doi.org/10.1145/3680528.3687672
