Deep Depth from Focal Stack with Defocus Model for Camera-Setting Invariance

Abstract

We propose deep depth from focal stack (DDFS), which takes a focal stack as input to a neural network to estimate scene depth. Defocus blur is a useful cue for depth estimation; however, the size of the blur depends not only on scene depth but also on camera settings such as focus distance, focal length, and f-number. Existing learning-based methods that lack an explicit defocus model cannot estimate a correct depth map when the camera settings differ between training and test time. Our method instead takes a plane sweep volume as input, which encodes the constraint among scene depth, defocus images, and camera settings; this intermediate representation enables depth estimation with camera settings that differ between training and test time. This camera-setting invariance broadens the applicability of DDFS. The experimental results also indicate that our method is robust to the synthetic-to-real domain gap.
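For intuition on why the blur size depends on camera settings as well as depth, the sketch below uses the standard thin-lens circle-of-confusion formula. This is an illustrative assumption, not necessarily the defocus model adopted in the paper, and the function name and sample values are hypothetical.

```python
# Illustrative sketch (assumed thin-lens model, not taken from the paper):
# the defocus blur (circle-of-confusion) diameter depends on scene depth z
# *and* on the camera settings (focal length f, f-number N, focus distance s).

def coc_diameter(z, focal_length, f_number, focus_distance):
    """Circle-of-confusion diameter (same units as focal_length) for an
    object at depth z under the standard thin-lens model."""
    f, N, s = focal_length, f_number, focus_distance
    aperture = f / N                      # aperture diameter A = f / N
    # Blur diameter: A * f * |z - s| / (z * (s - f))
    return aperture * f * abs(z - s) / (z * (s - f))

# The same scene depth yields different blur sizes under different settings,
# which is why a model trained with fixed settings can fail at test time
# unless the defocus model is made explicit, e.g. via a plane sweep volume.
print(coc_diameter(z=2.0, focal_length=0.05, f_number=2.0, focus_distance=1.0))
print(coc_diameter(z=2.0, focal_length=0.05, f_number=8.0, focus_distance=1.0))
```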

Citation (APA)

Fujimura, Y., Iiyama, M., Funatomi, T., & Mukaigawa, Y. (2024). Deep Depth from Focal Stack with Defocus Model for Camera-Setting Invariance. International Journal of Computer Vision, 132(6), 1970–1985. https://doi.org/10.1007/s11263-023-01964-x
