Modeling of 3D scene based on series of photographs taken with different depth-of-field

Abstract

This paper presents a method for fusing multifocus images into an enhanced depth-of-field composite image and for creating a 3D model of the photographed scene. A set of images of the same scene is taken with a typical digital camera equipped with macro lenses, each image at a different depth of field. The method employs convolution and morphological filters to designate the sharp regions in this set of images and to combine them into a single image in which all regions are properly focused. The method consists of several phases: image registration, height map creation, image reconstruction, and final 3D scene reconstruction. As a result, a 3D model of the photographed object is created. © 2008 Springer-Verlag Berlin Heidelberg.
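The fusion step described in the abstract can be illustrated with a minimal sketch. This is not the authors' exact algorithm (the paper uses convolution and morphological filters for sharpness detection); here a simple Laplacian-energy measure stands in as the sharpness criterion, and the per-pixel index of the sharpest image doubles as a crude height map:

```python
import numpy as np

def laplacian_energy(img):
    """Absolute discrete Laplacian as a per-pixel sharpness proxy
    (stand-in for the paper's convolution/morphological filters)."""
    p = np.pad(img, 1, mode="edge")
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
           - 4.0 * p[1:-1, 1:-1])
    return np.abs(lap)

def fuse_stack(images):
    """Fuse same-size grayscale images taken at different focus settings.

    At each pixel, take the value from the image whose local sharpness
    is highest. Returns (fused image, height map), where the height map
    is the index of the sharpest image per pixel -- the analogue of the
    paper's height-map phase.
    """
    stack = np.stack([np.asarray(im, dtype=float) for im in images])
    sharp = np.stack([laplacian_energy(im) for im in stack])
    height = np.argmax(sharp, axis=0)
    fused = np.take_along_axis(stack, height[None], axis=0)[0]
    return fused, height
```

With two synthetic 8x8 frames, each containing one in-focus bright detail, the fused image keeps both details and the height map records which frame each came from. A real pipeline would first register the images and smooth or morphologically clean the height map, as the abstract's phase list indicates.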

Citation (APA)

Denkowski, M., Chlebiej, M., & Mikołajczak, P. (2008). Modeling of 3D scene based on series of photographs taken with different depth-of-field. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5102 LNCS, pp. 25–34). https://doi.org/10.1007/978-3-540-69387-1_4
