Scene and motion reconstruction from defocused and motion-blurred images via anisotropic diffusion

22 citations · 35 Mendeley readers

Abstract

We propose a solution to the problem of inferring the depth map, radiance and motion of a scene from a collection of motion-blurred and defocused images. We model motion-blur and defocus as an anisotropic diffusion process, whose initial conditions depend on the radiance and whose diffusion tensor encodes the shape of the scene, the motion field and the optics parameters. We show that this model is well-posed and propose an efficient algorithm to infer the unknowns of the model. Inference is performed by minimizing the discrepancy between the measured blurred images and the ones synthesized via forward diffusion. Since the problem is ill-posed, we also introduce additional Tikhonov regularization terms. The resulting method is fast and robust to noise as shown by experiments with both synthetic and real data. © Springer-Verlag 2004.
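The forward model described in the abstract can be sketched in a few lines of NumPy. This is a simplified illustration, not the paper's method: it uses a scalar per-pixel diffusivity in place of the full anisotropic diffusion tensor, an explicit Euler time-stepping scheme, and omits the Tikhonov regularization terms; all function and variable names are illustrative.

```python
import numpy as np

def diffuse(radiance, diffusivity, dt=0.1, steps=20):
    """Synthesize a blurred image by running forward diffusion on the
    sharp radiance. A spatially varying scalar diffusivity stands in for
    the diffusion tensor; larger values mean stronger defocus/motion blur.
    Stability of this explicit scheme requires dt * max(diffusivity) <= 0.25.
    """
    u = radiance.astype(float).copy()
    for _ in range(steps):
        # 5-point Laplacian with replicated (Neumann) boundary conditions
        p = np.pad(u, 1, mode="edge")
        lap = (p[:-2, 1:-1] + p[2:, 1:-1]
               + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * u)
        u += dt * diffusivity * lap
    return u

def data_discrepancy(observed, radiance, diffusivity):
    """Least-squares mismatch between a measured blurred image and the one
    synthesized via forward diffusion -- the data term that inference
    would minimize (before adding regularization)."""
    return 0.5 * np.sum((diffuse(radiance, diffusivity) - observed) ** 2)
```

In the paper's setting, the unknowns (depth, radiance, motion) enter through the diffusion tensor and initial condition, and inference descends on the discrepancy above plus Tikhonov penalties; here the sketch only shows the synthesis and the data term.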

Citation (APA)

Favaro, P., Burger, M., & Soatto, S. (2004). Scene and motion reconstruction from defocused and motion-blurred images via anisotropic diffusion. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 3021, 257–269. https://doi.org/10.1007/978-3-540-24670-1_20
