Multi-view Consistent Generative Adversarial Networks for Compositional 3D-Aware Image Synthesis


Abstract

This paper studies compositional 3D-aware image synthesis for both single-object and multi-object scenes. We observe two remaining challenges in this field: existing approaches (1) lack geometry constraints and thus compromise the multi-view consistency of a single object, and (2) cannot scale to multi-object scenes with complex backgrounds. To address these challenges coherently, we propose multi-view consistent generative adversarial networks (MVCGAN) for compositional 3D-aware image synthesis. First, we build geometry constraints on the single object by leveraging the underlying 3D information. Specifically, we enforce photometric consistency between pairs of views, encouraging the model to learn the inherent 3D shape. Second, we adapt MVCGAN to multi-object scenarios. In particular, we formulate multi-object scene generation as a “decompose and compose” process. During training, we adopt a top-down strategy that decomposes training images into objects and backgrounds. When rendering, we follow the reverse, bottom-up order, composing the generated objects and background into the holistic scene. Extensive experiments on both single-object and multi-object datasets show that the proposed method achieves competitive performance for 3D-aware image synthesis.
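The photometric consistency constraint mentioned above can be illustrated with a minimal sketch: render the same scene from two camera poses, warp the auxiliary view into the primary camera frame (the depth-based reprojection is assumed to happen upstream), and penalize per-pixel appearance differences. The function name and signature below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def photometric_consistency_loss(primary_view: np.ndarray,
                                 warped_aux_view: np.ndarray) -> float:
    """Mean L1 photometric error between a primary rendered view and an
    auxiliary view already warped into the primary camera frame.

    Both inputs are (H, W, 3) float arrays in [0, 1]. A low loss
    encourages the generator to produce geometry that renders
    consistently across viewpoints. (Hedged sketch, not the authors'
    exact loss.)
    """
    return float(np.abs(primary_view - warped_aux_view).mean())

# Toy check: a perfectly consistent pair of views has zero loss.
view = np.random.default_rng(0).random((8, 8, 3))
print(photometric_consistency_loss(view, view))  # 0.0
```

In practice such a term would be added to the adversarial objective, so the generator is rewarded both for realism and for cross-view agreement.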

Citation (APA)

Zhang, X., Zheng, Z., Gao, D., Zhang, B., Yang, Y., & Chua, T. S. (2023). Multi-view Consistent Generative Adversarial Networks for Compositional 3D-Aware Image Synthesis. International Journal of Computer Vision, 131(8), 2219–2242. https://doi.org/10.1007/s11263-023-01805-x
