GENERATIVE ADVERSARIAL NETWORKS for SINGLE PHOTO 3D RECONSTRUCTION


Abstract

Fast but precise 3D reconstructions of cultural heritage scenes are increasingly in demand in archaeology and architecture. While modern multi-image 3D reconstruction approaches provide impressive results in terms of textured surface models, there is often a need to create a 3D model for which only a single photo (or a few sparse images) is available. This paper focuses on the single-photo 3D reconstruction problem for lost cultural objects of which only a few images remain. We use an image-to-voxel translation network (Z-GAN) as a starting point. The Z-GAN generator uses skip connections to transfer 2D features to the 3D voxel model effectively (Figure 1). The network can therefore generate voxel models of previously unseen objects using the object silhouettes present in the input image and the knowledge obtained during training. To train our Z-GAN network, we created a large dataset of aligned image sets and corresponding voxel models of an ancient Greek temple. We evaluated the Z-GAN network for single-photo reconstruction on complex structures such as temples, as well as on lost heritage still available in crowdsourced images. Comparisons of the reconstruction results with state-of-the-art methods are also presented and discussed.
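The core architectural idea described above is that 2D encoder features are passed through skip connections into a 3D voxel decoder. The toy sketch below illustrates that 2D-to-3D skip mechanism in NumPy; the "encoder" and "decoder" here are hypothetical fixed operations standing in for the learned convolutional layers of the actual Z-GAN generator, not the paper's implementation.

```python
import numpy as np

def encode_2d(image, n_features=8):
    """Toy 2D 'encoder': per-pixel features from an (H, W) image.
    A fixed random projection stands in for learned convolutions."""
    h, w = image.shape
    rng = np.random.default_rng(0)
    weights = rng.standard_normal((1, n_features))
    return image.reshape(h, w, 1) @ weights            # (H, W, C)

def skip_2d_to_3d(features_2d, depth):
    """The key idea of a 2D-to-3D skip connection: replicate the 2D
    feature map along the depth axis so a 3D decoder can reuse it."""
    return np.repeat(features_2d[np.newaxis, ...], depth, axis=0)  # (D, H, W, C)

def decode_voxels(features_3d):
    """Toy 3D 'decoder': collapse features to a binary occupancy per voxel."""
    logits = features_3d.mean(axis=-1)                 # (D, H, W)
    return (logits > 0).astype(np.uint8)

# A single input photo (random stand-in) mapped to a voxel grid.
image = np.random.default_rng(1).random((16, 16))
voxels = decode_voxels(skip_2d_to_3d(encode_2d(image), depth=16))
print(voxels.shape)  # (16, 16, 16)
```

The replication along the depth axis is what lets silhouette information from the input image constrain every depth slice of the output volume, which is the role the skip connections play in the generator.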

CITATION STYLE

APA

Kniaz, V. V., Remondino, F., & Knyaz, V. A. (2019). Generative adversarial networks for single photo 3D reconstruction. In The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences (Vol. XLII-2/W9, pp. 403–408). Copernicus GmbH. https://doi.org/10.5194/isprs-archives-XLII-2-W9-403-2019
