Solving the depth ambiguity in single-perspective images

  • El Helou, M.
  • Shahpaski, M.
  • Süsstrunk, S.

Abstract

Scene depth estimation is gaining importance as more and more AR/VR and robot vision applications are developed. Conventional depth-from-defocus techniques can passively provide depth maps from a single image. This is especially advantageous for moving scenes. However, they suffer from a depth ambiguity: two distinct depth planes can have the same amount of defocus blur in the captured image. We solve this ambiguity and, as a consequence, introduce a passive technique that provides a one-to-one mapping between depth and defocus blur. Our method relies on the fact that the relationship between defocus blur and depth is also wavelength dependent. The depth ambiguity is thus solved by leveraging (multi-)spectral information. Specifically, we analyze the difference in defocus blur between two channels to obtain distinct scene depth regions. This paper provides the derivation of our solution, a robustness analysis, and validation on consumer lenses.
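As a rough illustration of the idea in the abstract, the following thin-lens sketch shows how axial chromatic aberration can break the two-sided depth ambiguity. All numbers (aperture, focal lengths, focus distance) are invented for illustration and are not the paper's calibration; the actual derivation is in the full text.

```python
# Thin-lens model: a point at depth d images at v with 1/v = 1/f - 1/d,
# and the blur-circle diameter on a sensor at distance s is
# c(d) = A * s * |1/f - 1/d - 1/s|.
# Assumed (hypothetical) parameters: a 50 mm green-channel focal length,
# a slightly shorter blue-channel focal length (axial chromatic
# aberration), and focus set so the green channel is sharp at 1 m.

A = 0.010          # aperture diameter [m] (assumed)
f_green = 0.0500   # green-channel focal length [m] (assumed)
f_blue = 0.0495    # blue-channel focal length [m] (assumed)

# Sensor distance s chosen so the green channel is in focus at d = 1 m.
s = 1.0 / (1.0 / f_green - 1.0 / 1.0)

def blur_diameter(d, f):
    """Blur-circle diameter on the sensor for a point at depth d [m]."""
    return A * s * abs(1.0 / f - 1.0 / d - 1.0 / s)

# The ambiguity: two depths on opposite sides of the green focal plane
# produce the same green-channel blur.
d_near, d_far = 0.8, 4.0 / 3.0
c_g_near = blur_diameter(d_near, f_green)
c_g_far = blur_diameter(d_far, f_green)   # equal to c_g_near

# The blue channel focuses at a different depth, so its blur differs at
# the two candidates; the sign of (blue - green) separates them.
c_b_near = blur_diameter(d_near, f_blue)  # smaller than green blur here
c_b_far = blur_diameter(d_far, f_blue)    # larger than green blur here
```

With these assumed values the green blur is identical at 0.8 m and 1.33 m, while the blue-minus-green blur difference is negative at the near candidate and positive at the far one, which is the kind of per-channel difference the paper exploits to obtain a one-to-one depth mapping.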

Citation (APA)

El Helou, M., Shahpaski, M., & Süsstrunk, S. (2019). Solving the depth ambiguity in single-perspective images. OSA Continuum, 2(10), 2901. https://doi.org/10.1364/osac.2.002901
