Data fusion for photorealistic 3D models


Abstract

This study aims at building photorealistic 3D models of real-world objects. We discuss the problem of combining a textureless 3D model obtained by a 3D scanner with optical images that provide textural information about the object. Recently, we proposed a novel method to register an uncalibrated image pair to a 3D surface model. After registration, the images are mapped onto the surface. However, since the images show different parts of the object, only partially overlapping textures can be extracted from them, and combining the images into a complete texture map that covers the entire object is not trivial. We present a method for building photorealistic 3D models that includes algorithms for data registration and for merging multiple texture maps using surface flattening. Experimental results on real and synthetic data are shown. © Springer-Verlag Berlin Heidelberg 2005.
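The abstract notes that merging partially overlapping texture maps into one complete map is non-trivial. As a minimal illustration only (the paper's actual method relies on surface flattening, whose details are not given in the abstract), the sketch below blends partial texture maps with per-texel validity masks by weighted averaging; all function and variable names are hypothetical:

```python
import numpy as np

def merge_texture_maps(textures, masks):
    """Merge partial texture maps (H x W x 3 arrays) into one map.

    Each mask (H x W) marks valid texels in the corresponding map.
    Overlapping regions are averaged; this simple blending stands in
    for the surface-flattening-based merging described in the paper.
    """
    acc = np.zeros_like(textures[0], dtype=np.float64)
    weight = np.zeros(textures[0].shape[:2], dtype=np.float64)
    for tex, mask in zip(textures, masks):
        acc += tex * mask[..., None]   # accumulate colors where valid
        weight += mask                 # count contributing maps
    valid = weight > 0
    merged = np.zeros_like(acc)
    merged[valid] = acc[valid] / weight[valid][:, None]
    return merged, valid
```

In practice, more sophisticated weighting (e.g. by viewing angle or distance to the texture-map boundary) reduces visible seams where the source images overlap.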

Citation (APA)

Jankó, Z., & Chetverikov, D. (2005). Data fusion for photorealistic 3D models. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3691 LNCS, pp. 240–247). Springer Verlag. https://doi.org/10.1007/11556121_30
