This study aims at building photorealistic 3D models of real-world objects. We discuss the problem of combining a textureless 3D model, obtained by a 3D scanner, with optical images that provide textural information about the object. Recently, we proposed a novel method to register an uncalibrated image pair to a 3D surface model. After registration, the images are mapped onto the surface. However, since the images show different parts of the object, only partially overlapping textures can be extracted from them, and combining these into a complete texture map covering the entire object is not trivial. We present a method to build photorealistic 3D models that includes algorithms for data registration and for merging multiple texture maps using surface flattening. Experimental results on real and synthetic data are shown. © Springer-Verlag Berlin Heidelberg 2005.
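The paper does not give implementation details in the abstract, but the core merging step can be illustrated with a minimal sketch: once partial textures have been mapped into a common parameterization (e.g. after surface flattening), overlapping texels can be combined by weighted averaging. The function name `merge_texture_maps`, the per-texel boolean masks, and the uniform weighting are assumptions for illustration, not the authors' actual algorithm.

```python
import numpy as np

def merge_texture_maps(textures, masks):
    """Merge partial texture maps defined on a common (flattened) UV grid.

    textures: list of HxWx3 float arrays, each covering part of the surface.
    masks:    list of HxW boolean arrays marking valid texels in each map.
    Overlapping texels are averaged; texels covered by no map remain zero.
    Returns the merged HxWx3 texture and an HxW coverage mask.
    """
    acc = np.zeros(textures[0].shape, dtype=np.float64)
    weight = np.zeros(textures[0].shape[:2], dtype=np.float64)
    for tex, mask in zip(textures, masks):
        # Accumulate only valid texels; broadcast the 2D mask over RGB.
        acc += tex * mask[..., None]
        weight += mask
    covered = weight > 0
    acc[covered] /= weight[covered][..., None]
    return acc, covered
```

In practice a real system would replace the uniform weights with view-dependent ones (e.g. based on viewing angle or distance to the seam) to hide illumination differences between images; the sketch above only shows the accumulation structure.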
CITATION STYLE
Jankó, Z., & Chetverikov, D. (2005). Data fusion for photorealistic 3D models. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3691 LNCS, pp. 240–247). Springer-Verlag. https://doi.org/10.1007/11556121_30