Multi-view to Novel View: Synthesizing Novel Views With Self-learned Confidence

17 Citations · 149 Mendeley Readers

This article is free to access.

Abstract

In this paper, we address the task of multi-view novel view synthesis, where we are interested in synthesizing a target image with an arbitrary camera pose from given source images. We propose an end-to-end trainable framework that learns to exploit multiple viewpoints to synthesize a novel view without any 3D supervision. Specifically, our model consists of a flow prediction module and a pixel generation module to directly leverage information presented in source views as well as hallucinate missing pixels from statistical priors. To merge the predictions produced by the two modules given multi-view source images, we introduce a self-learned confidence aggregation mechanism. We evaluate our model on images rendered from 3D object models as well as real and synthesized scenes. We demonstrate that our model is able to achieve state-of-the-art results as well as progressively improve its predictions when more source images are available.
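
The abstract describes two prediction pathways, flow-based warping of the source views and direct pixel generation, whose outputs are fused by a self-learned confidence mechanism. As a rough illustration only, the following minimal PyTorch sketch shows what such a per-pixel, confidence-weighted aggregation could look like; the function name, tensor shapes, and softmax normalization are assumptions for illustration, not the authors' implementation.

```python
import torch

def aggregate_predictions(flow_images, flow_confidences, gen_image, gen_confidence):
    """Fuse flow-warped views and a generated image via per-pixel confidences.

    flow_images:      (N, 3, H, W) target-view candidates warped from N sources
    flow_confidences: (N, 1, H, W) unnormalized per-pixel confidence logits
    gen_image:        (1, 3, H, W) hallucinated candidate from the pixel module
    gen_confidence:   (1, 1, H, W) its per-pixel confidence logits
    """
    # Stack all N + 1 candidate images and their confidence logits.
    candidates = torch.cat([flow_images, gen_image], dim=0)        # (N+1, 3, H, W)
    logits = torch.cat([flow_confidences, gen_confidence], dim=0)  # (N+1, 1, H, W)

    # Softmax over the candidate axis so the weights at each pixel sum to 1.
    weights = torch.softmax(logits, dim=0)                         # (N+1, 1, H, W)

    # Confidence-weighted average; broadcasting expands weights over RGB.
    return (weights * candidates).sum(dim=0)                       # (3, H, W)

# Toy usage with 4 source views and a 64x64 target.
n, h, w = 4, 64, 64
out = aggregate_predictions(
    torch.rand(n, 3, h, w), torch.randn(n, 1, h, w),
    torch.rand(1, 3, h, w), torch.randn(1, 1, h, w),
)
assert out.shape == (3, h, w)
```

One property of this formulation, consistent with the abstract's claim of progressive improvement, is that it handles an arbitrary number of source views N, since the softmax renormalizes the weights over however many candidates are stacked.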

Citation (APA)

Sun, S. H., Huh, M., Liao, Y. H., Zhang, N., & Lim, J. J. (2018). Multi-view to Novel View: Synthesizing Novel Views With Self-learned Confidence. In Lecture Notes in Computer Science (Vol. 11207, pp. 162–178). Springer. https://doi.org/10.1007/978-3-030-01219-9_10
