Acquiring, stitching and blending diffuse appearance attributes on 3D models

Abstract

A new system for the construction of highly realistic models of real free-form 3D objects is proposed, based on the integration of several techniques: automatic 3D scanning, inverse illumination, inverse texture-mapping and textured 3D graphics. Our system improves the quality of a 3D model (e.g. one acquired with a range scanning device) by adding color detail and, if required, high-frequency shape detail. The detail is obtained by processing a set of digital photographs of the object through several subtasks: computing camera calibration and position, removing illumination effects to recover both illumination-invariant reflectance properties and a high-resolution surface normal field, and finally blending and stitching the acquired detail onto the triangle mesh via standard texture mapping. In particular, a smooth join between different images that map onto adjacent sections of the surface is obtained by applying an accurate piecewise local registration of the original images and by blending textures. For each mesh face that lies on the adjacency border between different observed images, a corresponding triangular texture patch can also be resampled as a weighted blend of the adjacent image sections. Examples of the results obtained on sample works of art are presented and discussed.
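The resampling step for border faces lends itself to a short illustration. The Python sketch below shows how a triangular texture patch for a face on the adjacency border between several photographs could be resampled as a weighted blend of the corresponding image regions. The function name, the per-face scalar weights, the patch resolution and the nearest-neighbour lookup are assumptions made for illustration only; they are not the paper's exact procedure, which relies on piecewise local registration of the images before blending.

```python
import numpy as np

def blend_border_patch(images, uv_per_image, weights, patch_res=32):
    """Resample a triangular texture patch for a mesh face lying on the
    adjacency border between several observed images (illustrative sketch).

    images       : list of HxWx3 float arrays (registered photographs)
    uv_per_image : list of 3x2 arrays; per-image pixel coordinates of the
                   face's three vertices (assumed already registered)
    weights      : per-image scalar weights for this face (hypothetical,
                   e.g. derived from viewing angle); normalized below
    patch_res    : resolution of the output triangular patch (assumption)
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()

    patch = np.zeros((patch_res, patch_res, 3))
    for i in range(patch_res):
        for j in range(patch_res - i):      # barycentric grid over the triangle
            b = np.array([i, j, patch_res - 1 - i - j], float) / (patch_res - 1)
            color = np.zeros(3)
            for img, uv, wk in zip(images, uv_per_image, w):
                x, y = b @ uv               # texel position in this photograph
                xi = int(round(np.clip(x, 0, img.shape[1] - 1)))
                yi = int(round(np.clip(y, 0, img.shape[0] - 1)))
                # nearest-neighbour lookup; bilinear sampling would be the
                # natural refinement in a real implementation
                color += wk * img[yi, xi]
            patch[i, j] = color
    return patch
```

In this sketch the weights are constant over the face; a per-texel weighting that falls off toward each image's boundary would give smoother transitions, in the spirit of the texture blending described in the abstract.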

Citation (APA)

Rocchini, C., Cignoni, P., Montani, C., & Scopigno, R. (2002). Acquiring, stitching and blending diffuse appearance attributes on 3D models. Visual Computer, 18(3), 186–204. https://doi.org/10.1007/s003710100146
