Depth estimation and inpainting with an unconstrained camera

Abstract

Unrestricted camera motion and the ability to operate over a range of lens parameters are often desirable when using an off-the-shelf camera. Variations in intrinsic and extrinsic parameters induce defocus and pixel motion, both of which relate to scene structure. We propose a depth estimation approach that elegantly couples the motion and defocus cues. We further advocate a natural extension of our framework for inpainting both the depth map and the image, using the motion cue. Unlike traditional inpainting, our approach also accounts for defocus blur, ensuring that the image inpainting is coherent with respect to defocus. Our estimation approach uses belief propagation, handles occlusions, and exploits a color image segmentation cue. © 2010. The copyright of this document resides with its authors.
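The abstract names belief propagation as the inference engine for depth labeling. The paper's actual energy terms (coupled motion and defocus data costs, occlusion handling, segmentation cue) are not given here, so the following is only a generic sketch of loopy min-sum belief propagation on a 4-connected grid MRF with a truncated-linear smoothness prior — a common setup for discrete depth-label inference, not the authors' specific formulation. The function name `bp_depth_labels`, the cost weights, and the toy data term are all illustrative assumptions.

```python
import numpy as np

def bp_depth_labels(data_cost, n_iters=10, smooth_weight=0.5, trunc=2.0):
    """Loopy min-sum belief propagation on a 4-connected grid MRF.

    data_cost: (H, W, L) per-pixel cost for each of L depth labels
               (in the paper's setting this would combine motion and
               defocus cues; here it is just an input array).
    Returns an (H, W) array of selected label indices.
    """
    H, W, L = data_cost.shape
    labels = np.arange(L, dtype=float)
    # Truncated-linear pairwise cost: encourages piecewise-smooth depth
    # while allowing discontinuities at object boundaries.
    pairwise = smooth_weight * np.minimum(
        np.abs(labels[:, None] - labels[None, :]), trunc)

    # msgs[d][i, j] = message arriving at pixel (i, j) travelling in
    # direction d ("d" = downward, i.e. sent from the pixel above, etc.).
    msgs = {d: np.zeros((H, W, L)) for d in ("u", "d", "l", "r")}
    opposite = {"u": "d", "d": "u", "l": "r", "r": "l"}

    for _ in range(n_iters):
        new = {}
        for d in ("u", "d", "l", "r"):
            # Belief at the sender, excluding the message coming back
            # from the pixel we are about to send to.
            belief = data_cost + sum(m for k, m in msgs.items()
                                     if k != opposite[d])
            # out[q] = min over sender labels p of belief[p] + pairwise[p, q]
            out = np.min(belief[..., :, None] + pairwise[None, None], axis=2)
            out -= out.min(axis=2, keepdims=True)  # normalize for stability
            # Shift the message one pixel in direction d.
            shifted = np.zeros_like(out)
            if d == "d":
                shifted[1:] = out[:-1]
            elif d == "u":
                shifted[:-1] = out[1:]
            elif d == "r":
                shifted[:, 1:] = out[:, :-1]
            else:  # "l"
                shifted[:, :-1] = out[:, 1:]
            new[d] = shifted
        msgs = new

    final_belief = data_cost + sum(msgs.values())
    return np.argmin(final_belief, axis=2)
```

A typical usage would build `data_cost` from photometric evidence and call `bp_depth_labels(data_cost)`; on a toy scene whose left half clearly prefers one label and whose right half another, the smoothness prior yields a clean piecewise-constant depth map.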

Citation (APA)

Bhavsar, A. V., & Rajagopalan, A. N. (2010). Depth estimation and inpainting with an unconstrained camera. In British Machine Vision Conference, BMVC 2010 - Proceedings. British Machine Vision Association, BMVA. https://doi.org/10.5244/C.24.84
