Extraction of three-dimensional information from reconstructions of in-line digital holograms

Abstract

Holography, the science of recording and reconstructing a complex electromagnetic wavefield, was invented by Gabor in 1948 [1]. This initial invention was concerned with electron microscopy and predated the invention of the laser. With the advent of the laser, E. Leith and J. Upatnieks [2,3] extended the holographic principle by introducing the offset reference wave. This enabled the separation of the object wavefield from the other components generated in the optical reconstruction process, namely the intensities of the object and reference wavefields and the so-called "ghost" or conjugate image. The term holography is also used to describe the science of optical interferometry [4], which encompasses important industrial measurement techniques. We note that holography is at the heart of countless optical and nonoptical techniques [5].

Using photosensitive recording materials to record holograms is costly and inflexible. Digital holography (DH) [6,7,8,9,10,11] refers to the science of using discrete electronic devices, such as CCDs, to record the hologram. In this case, reconstruction is performed numerically by simulating the propagation of the wavefield back to the plane of the object. One major advantage of DH over material holography is the ability to apply discrete signal processing techniques to the recorded signals [12,13,14,15]. In recent years DH has been demonstrated to be a useful method in many areas of optics, such as microscopy [16], deformation analysis [17], object contouring [18], and particle sizing and position measurement [19].

"In-line" or "on-axis" DH refers to the implementation of the original Gabor architecture, in which the reference wavefield travels in the same direction as the object wavefield. As in the continuous case, this method suffers from poor reconstructed image quality due to the presence of the intensity terms and the conjugate image, which contaminate the reconstructed object image. While it is possible to remove the intensity terms with efficient numerical techniques [20], it remains difficult to remove the conjugate image. The conjugate image can be removed using an off-axis recording setup equivalent to that used by Leith and Upatnieks [2,3]. However, this increases the spatial resolution requirements and significantly limits the system, which is undesirable given the already limited resolution of digital cameras. An alternative approach known as phase-shifting interferometry (PSI) [21] allows an in-line setup to be used with at least two successive captures and enables separation of the object wavefield from all of the other terms.

A disadvantage of holographic reconstructions is their limited depth-of-field. When a digital hologram is reconstructed, a distance value d is input as a parameter to the reconstruction algorithm. Only object points located at the distance d from the camera are in focus in the reconstruction. Complex 3-D scenes, i.e., scenes containing multiple objects or multiple object features located at different depths, lead to reconstructions with large blurred regions. By applying focus measures to sets of reconstructions, autofocus algorithms have been implemented on computer-generated DHs [22] and on DHs of microscopic objects [23,24]. In this chapter we develop an approach for estimating the surface shape of macroscopic objects from digital holographic reconstructions using multiple independently focused images.
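As a brief illustration of the numerical reconstruction step mentioned above, the sketch below propagates a recorded complex hologram to a plane at distance d using an angular-spectrum kernel. The function name, parameters, and the choice of kernel are our own illustrative assumptions; the chapter does not prescribe a particular discrete propagation algorithm.

```python
import numpy as np

def reconstruct(hologram, wavelength, pixel_pitch, d):
    """Numerically propagate a complex hologram to a plane at distance d.

    Illustrative angular-spectrum propagation (a sketch, not the chapter's
    specific reconstruction algorithm).
    """
    ny, nx = hologram.shape
    # Spatial-frequency grids for the sampled hologram
    fx = np.fft.fftfreq(nx, d=pixel_pitch)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Angular-spectrum transfer function (evanescent components suppressed)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(1j * 2 * np.pi * d / wavelength * np.sqrt(np.maximum(arg, 0.0)))
    return np.fft.ifft2(np.fft.fft2(hologram) * H)
```

Varying d in such a routine yields the stack of differently focused reconstructions on which the focus measures discussed below operate.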
We can estimate the focal plane of such a DH by maximizing a focus metric, such as variance, applied to the intensities of several 2-D reconstructions, each computed at a different focal plane. Through our depth-from-focus (DFF) technique we can create a depth map of the scene, and this depth information can then be used to perform tasks such as focused image creation [25], background segmentation [26], and object segmentation [27]. Using the segmentation masks output by our process, we can segment different DH reconstruction planes into their individual objects. By numerically propagating a complex wavefront and superposing a second wavefront at a different plane, we can create synthetic digital holograms of real-world objects. These can then be viewed on conventional three-dimensional displays [28].

The structure of this chapter is as follows. In Sect. 15.2 we discuss the recording process for PSI DHs and our experimental setup. In Sect. 15.3 we introduce focus and focus detection for DHs. The algorithms for calculating a depth map using an overlapping DFF approach are discussed in detail. Section 15.4 presents a sequential discussion of our different data extraction algorithms, namely, (i) depth map extraction, (ii) extended focus image (EFI) creation, (iii) segmentation, and (iv) synthetic digital holographic scene creation, and we conclude in Sect. 15.5. © 2009 Springer-Verlag New York.
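A minimal sketch of the block-wise depth-from-focus idea described above is given below, assuming a stack of intensity reconstructions has already been computed (for example with a propagation routine such as the one sketched earlier). The function name, block size, and step are hypothetical choices for illustration, not the chapter's exact parameters.

```python
import numpy as np

def depth_from_focus(intensities, depths, block=64, step=32):
    """Block-wise depth-from-focus over a stack of intensity reconstructions.

    `intensities` is a sequence of 2-D intensity reconstructions, one per
    candidate depth in `depths`. For each overlapping block we keep the depth
    whose reconstruction maximizes a variance focus measure. Block size,
    step, and the variance metric are illustrative assumptions.
    """
    ny, nx = intensities[0].shape
    ys = list(range(0, ny - block + 1, step))
    xs = list(range(0, nx - block + 1, step))
    best_focus = np.full((len(ys), len(xs)), -np.inf)
    depth_map = np.zeros((len(ys), len(xs)))

    for d, intensity in zip(depths, intensities):
        for i, y in enumerate(ys):
            for j, x in enumerate(xs):
                focus = intensity[y:y + block, x:x + block].var()
                if focus > best_focus[i, j]:
                    best_focus[i, j] = focus
                    depth_map[i, j] = d
    return depth_map
```

The resulting depth map can then drive tasks such as extended focus image creation and segmentation, as outlined above.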

Citation (APA)

McElhinney, C. P., Hennelly, B. M., Javidi, B., & Naughton, T. J. (2009). Extraction of three-dimensional information from reconstructions of in-line digital holograms. In Three-Dimensional Imaging, Visualization, and Display (pp. 303–332). Springer US. https://doi.org/10.1007/978-0-387-79335-1_15
