SinMPI: Novel View Synthesis from a Single Image with Expanded Multiplane Images

Abstract

Single-image novel view synthesis is a challenging and ongoing problem that aims to generate an infinite number of consistent views from a single input image. Although significant efforts have been made to advance the quality of generated novel views, less attention has been paid to the expansion of the underlying scene representation, which is crucial to the generation of realistic novel view images. This paper proposes SinMPI, a novel method that uses an expanded multiplane image (MPI) as the 3D scene representation to significantly expand the perspective range of MPI and generate high-quality novel views from a large multiplane space. The key idea of our method is to use Stable Diffusion [Rombach et al. 2021] to generate out-of-view contents, project all scene contents into an expanded multiplane image according to depths predicted by monocular depth estimators, and then optimize the multiplane image under the supervision of pseudo multi-view data generated by a depth-aware warping and inpainting module. Both qualitative and quantitative experiments have been conducted to validate the superiority of our method to the state of the art. Our code and data are available at https://github.com/TrickyGo/SinMPI.
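For context, a multiplane image (MPI) represents a scene as a stack of fronto-parallel RGBA planes that are rendered into a view by back-to-front alpha compositing. The minimal sketch below illustrates that standard compositing step; the function name and tensor shapes are illustrative assumptions for exposition, not code taken from the SinMPI repository.

```python
import torch

def composite_mpi(colors, alphas):
    """Render an MPI by back-to-front alpha ("over") compositing.

    colors: (D, 3, H, W) RGB of each fronto-parallel plane, back plane first.
    alphas: (D, 1, H, W) per-plane opacity in [0, 1].
    Returns a (3, H, W) rendered image.
    """
    out = torch.zeros_like(colors[0])
    for rgb, a in zip(colors, alphas):
        # Each nearer plane occludes what has been composited so far.
        out = rgb * a + out * (1.0 - a)
    return out

# Toy example: a 4-plane MPI at 8x8 resolution with random contents.
D, H, W = 4, 8, 8
colors = torch.rand(D, 3, H, W)
alphas = torch.rand(D, 1, H, W)
image = composite_mpi(colors, alphas)
print(image.shape)  # torch.Size([3, 8, 8])
```

In SinMPI this representation is expanded beyond the input frustum, so the compositing above would operate on a larger multiplane volume whose out-of-view regions are filled by the diffusion-based outpainting described in the abstract.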

Citation (APA)

Pu, G., Wang, P.-S., & Lian, Z. (2023). SinMPI: Novel View Synthesis from a Single Image with Expanded Multiplane Images. In SIGGRAPH Asia 2023 Conference Papers (SA '23). Association for Computing Machinery. https://doi.org/10.1145/3610548.3618155
