Specular-to-Diffuse Translation for Multi-view Reconstruction

Abstract

Most multi-view 3D reconstruction algorithms, especially those that exploit shape-from-shading cues, assume that object appearance is predominantly diffuse. To alleviate this restriction, we introduce S2Dnet, a generative adversarial network that translates multiple views of objects with specular reflection into diffuse ones, so that multi-view reconstruction methods can be applied more effectively. Our network extends unsupervised image-to-image translation to multi-view “specular to diffuse” translation. To preserve object appearance across multiple views, we introduce a Multi-View Coherence loss (MVC) that evaluates the similarity and faithfulness of local patches after view transformation. In addition, we carefully design and generate a large synthetic training data set using physically-based rendering. At test time, our network takes only the raw glossy images as input, without extra information such as segmentation masks or lighting estimation. Results demonstrate that multi-view reconstruction can be significantly improved using the images filtered by our network.
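
The abstract only names the Multi-View Coherence (MVC) loss; as a rough illustration of the underlying idea, the sketch below penalizes appearance differences between local patches at corresponding locations in two translated views. This is a minimal sketch assuming PyTorch and known cross-view correspondences (e.g., from the cameras of the synthetic training set); the function names and the L1 patch comparison are illustrative assumptions, not the paper's actual formulation.

    import torch
    import torch.nn.functional as F

    def extract_patch(img, center, size):
        # img: (C, H, W); center: (row, col). Returns a size x size crop
        # around the center; assumes the patch lies inside the image.
        r, c = center
        half = size // 2
        return img[:, r - half : r + half + 1, c - half : c + half + 1]

    def multi_view_coherence_loss(view_a, view_b, matches, patch_size=7):
        # view_a, view_b: translated (diffuse) images, each (C, H, W).
        # matches: list of ((ra, ca), (rb, cb)) pixel locations that
        # correspond across the two views.
        loss = 0.0
        for pa, pb in matches:
            patch_a = extract_patch(view_a, pa, patch_size)
            patch_b = extract_patch(view_b, pb, patch_size)
            loss = loss + F.l1_loss(patch_a, patch_b)
        return loss / max(len(matches), 1)

    # Toy usage: random "views" with two hand-picked correspondences.
    a = torch.rand(3, 64, 64)
    b = torch.rand(3, 64, 64)
    corr = [((20, 20), (22, 19)), ((40, 33), (41, 35))]
    print(multi_view_coherence_loss(a, b, corr).item())

In a real training loop such a term would be added to the adversarial and translation losses, encouraging the generator to produce views that stay mutually consistent rather than merely plausible in isolation.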

Citation

Wu, S., Huang, H., Portenier, T., Sela, M., Cohen-Or, D., Kimmel, R., & Zwicker, M. (2018). Specular-to-Diffuse Translation for Multi-view Reconstruction. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11208 LNCS, pp. 193–211). Springer Verlag. https://doi.org/10.1007/978-3-030-01225-0_12
