Inferring depth from a pair of images captured using different aperture settings

Abstract

Given two pictures of the same scene captured with the same camera and lens, the one taken with a large aperture will appear partially blurred, while the one taken with a small aperture will appear sharp throughout. This paper investigates two ways of inferring the depth of the scene from such an image pair, under the constraint that both pictures are focused on the closest point of the scene. Our first method blurs the image pair with a series of Gaussian kernels; the second shrinks the image pair to a series of smaller sizes. In both methods, areas that are sharp in both images always remain similar to each other, whereas areas that appear sharp in one image but blurred in the other do not become similar until they are blurred with a large Gaussian kernel or shrunk to a small size. This observation lets us roughly tell which objects in the scene are closer to the camera and which are farther away. At the end of this paper, we discuss the limitations of the proposed approaches and directions for future work. © Springer-Verlag 2013.
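
As a rough illustration of the first method described in the abstract, the sketch below estimates a coarse depth ordering from such an aperture pair using OpenCV and NumPy. It is not the authors' implementation: the function name, the sigma schedule, the patch size, and the similarity threshold are all assumptions chosen for demonstration. The idea follows the abstract directly: blur both images with progressively larger Gaussian kernels and record, per pixel, the first blur level at which the two images become locally similar; in-focus (near) regions match early, defocused (far) regions only match after heavy blurring.

```python
import cv2
import numpy as np

def depth_order_from_aperture_pair(small_aperture_img, large_aperture_img,
                                   sigmas=(0.5, 1.0, 2.0, 4.0, 8.0),
                                   patch=15, threshold=0.02):
    """Rough per-pixel depth index from a sharp (small-aperture) image and a
    partially blurred (large-aperture) image of the same scene, both focused
    on the nearest point. Smaller index ~ nearer, larger index ~ farther.
    All parameter values here are illustrative assumptions."""
    sharp = cv2.cvtColor(small_aperture_img, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    blurry = cv2.cvtColor(large_aperture_img, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0

    # len(sigmas) marks pixels that never became similar (farthest / unknown)
    depth_index = np.full(sharp.shape, len(sigmas), dtype=np.uint8)

    for level, sigma in enumerate(sigmas):
        a = cv2.GaussianBlur(sharp, (0, 0), sigma)
        b = cv2.GaussianBlur(blurry, (0, 0), sigma)
        # Local mean absolute difference as a crude patch-similarity measure.
        diff = cv2.blur(np.abs(a - b), (patch, patch))
        newly_similar = (diff < threshold) & (depth_index == len(sigmas))
        depth_index[newly_similar] = level

    return depth_index
```

A typical use would be to load the two registered photographs, call the function, and visualize `depth_index` as a grayscale map; the second method from the abstract could be sketched analogously by replacing the Gaussian-blur loop with repeated downscaling (e.g. `cv2.resize`) before comparing the pair.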

Citation (APA)

Li, Y., Au, O. C., Xu, L., Sun, W., & Hu, W. (2013). Inferring depth from a pair of images captured using different aperture settings. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7733 LNCS, pp. 187–193). https://doi.org/10.1007/978-3-642-35728-2_18
