This contribution describes an automatic 3D surface modeling system that extracts dense metric 3D surfaces from an uncalibrated video sequence. A static 3D scene is observed from multiple viewpoints by freely moving a video camera around the object. No restrictions are imposed on camera movement or internal camera parameters such as zoom, since the camera pose and intrinsic parameters are calibrated from the sequence itself. Dense surface reconstructions are obtained by first treating consecutive images of the sequence as stereoscopic pairs and computing dense disparity maps for all image pairs. All viewpoints are then linked by controlled correspondence linking for each image pixel. The correspondence linking algorithm allows accurate depth estimation as well as image texture fusion from all viewpoints simultaneously. By keeping track of surface visibility and measurement uncertainty, it copes with occlusions and measurement outliers. Correspondence linking is applied to increase the robustness and geometric resolution of the surface depth, to remove highlights and specular reflections, and to create super-resolution texture maps for increased realism. The major impact of this work is the ability to automatically generate geometrically correct and visually pleasing 3D surface models from image sequences alone, which enables economical model generation for a wide range of applications. The resulting textured 3D surface models are highly realistic VRML representations of the scene.
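The following is a minimal Python sketch of the correspondence-linking idea summarized above: per-pixel correspondences are chained through consecutive disparity maps, each new measurement is fused with the running estimate, and the chain is broken when a measurement deviates too strongly, which is how occlusions and outliers are rejected. The function name, the one-dimensional disparity convention, the assumption of roughly uniform camera motion (so that consecutive disparities are comparable), and the inverse-variance fusion are illustrative simplifications, not the paper's actual formulation, which triangulates metric depth and tracks visibility explicitly.

```python
import numpy as np

def link_correspondences(disparity_maps, uncertainties, max_residual=2.0):
    """Sketch of correspondence linking along an image sequence.

    disparity_maps[k][y, x] is the horizontal disparity from frame k to
    frame k+1; uncertainties[k][y, x] is its standard deviation.  Both are
    hypothetical inputs assumed to come from a pairwise stereo matcher.
    """
    h, w = disparity_maps[0].shape
    fused = np.zeros((h, w))              # fused disparity (proxy for depth)
    fused_var = np.full((h, w), np.inf)   # variance of the fused estimate
    links = np.zeros((h, w), dtype=int)   # number of frames linked per pixel

    for y in range(h):
        for x in range(w):
            pos = float(x)                # position of the chained pixel
            est, var = None, None
            for k, (dmap, umap) in enumerate(zip(disparity_maps, uncertainties)):
                xi = int(round(pos))
                if not (0 <= xi < w):     # chain left the image
                    break
                d, s = dmap[y, xi], umap[y, xi]
                if est is None:
                    est, var = d, s ** 2
                else:
                    # occlusion / outlier test against the running estimate
                    if abs(d - est) > max_residual * np.sqrt(var + s ** 2):
                        break
                    # inverse-variance fusion of the new measurement
                    gain = var / (var + s ** 2)
                    est = est + gain * (d - est)
                    var = (1.0 - gain) * var
                pos += d                  # follow the correspondence onward
                links[y, x] = k + 1
            fused[y, x], fused_var[y, x] = est, var

    return fused, fused_var, links


if __name__ == "__main__":
    # Toy example: four consecutive pairs of a 32x32 sequence with a slowly
    # varying synthetic disparity field plus noise.
    rng = np.random.default_rng(0)
    true_d = np.tile(np.linspace(2.0, 2.5, 32), (32, 1))
    dmaps = [true_d + rng.normal(0, 0.2, true_d.shape) for _ in range(4)]
    umaps = [np.full(true_d.shape, 0.2) for _ in range(4)]
    fused, var, links = link_correspondences(dmaps, umaps)
    print("mean absolute error:", np.abs(fused - true_d).mean())
    print("average chain length:", links.mean())
```

Linking more frames reduces the variance of each per-pixel estimate, which is the mechanism behind the increased depth resolution claimed in the abstract; the same chains can be used to sample texture from every linked view and fuse it, e.g. to suppress view-dependent highlights.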
Koch, R., Pollefeys, M., & Van Gool, L. (1998). Multi viewpoint stereo from uncalibrated video sequences. In Lecture Notes in Computer Science (Vol. 1406, pp. 55–71). Springer. https://doi.org/10.1007/BFb0055659