This paper presents a method for fusing multifocus images into an enhanced depth-of-field composite image and for creating a 3D model of the photographed scene. A set of images of the same scene is taken with a typical digital camera with macro lenses, each image with a different depth-of-field. The method employs convolution and morphological filters to designate sharp regions in this set of images and combine them into a single image in which all regions are properly focused. The presented method consists of several phases: image registration, height map creation, image reconstruction, and final 3D scene reconstruction. As a result, a 3D model of the photographed object is created. © 2008 Springer-Verlag Berlin Heidelberg.
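The core fusion idea described above (convolution filters to designate sharp regions, a per-pixel height map, and a fused image) can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the authors' exact pipeline: it uses a discrete Laplacian as the focus measure and a per-pixel argmax over the registered stack; the paper's actual kernels and morphological filtering are not reproduced here.

```python
import numpy as np

def focus_measure(img):
    # Absolute response of a 4-neighbour discrete Laplacian, used here as a
    # simple sharpness measure (the paper's exact convolution kernel is an
    # assumption). np.roll wraps at the borders, which is acceptable for a sketch.
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return np.abs(lap)

def fuse(stack):
    # Fuse an already-registered multifocus stack: for each pixel, keep the
    # value from the image whose focus measure is highest. The argmax index
    # map plays the role of the height map (image index ~ focal depth).
    imgs = np.stack(stack)                       # shape: (n, H, W)
    measures = np.stack([focus_measure(i) for i in stack])
    height_map = np.argmax(measures, axis=0)     # per-pixel sharpest image
    fused = np.take_along_axis(imgs, height_map[None], axis=0)[0]
    return fused, height_map
```

With two images that are sharp in complementary halves of the frame, `fuse` returns a composite that is sharp everywhere and a height map indexing which image supplied each pixel.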
CITATION STYLE
Denkowski, M., Chlebiej, M., & Mikołajczak, P. (2008). Modeling of 3D scene based on series of photographs taken with different depth-of-field. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5102 LNCS, pp. 25–34). https://doi.org/10.1007/978-3-540-69387-1_4