Plenoptic Modeling and Rendering from Image Sequences Taken by a Hand-Held Camera

  • Heigl B
  • Koch R
  • Pollefeys M
  • et al.

Abstract

In this contribution we focus on plenoptic scene modeling and rendering from long image sequences taken with a hand-held camera. The image sequence is calibrated with a structure-from-motion approach that takes the special viewing geometry of plenoptic scenes into account. By applying a stereo matching technique, dense depth maps are recovered locally for each viewpoint. View-dependent rendering is accomplished by mapping all images onto a common plane of mean geometry and weighting them according to the current position of a virtual camera. To improve accuracy, approximating planes are defined locally in a hierarchical refinement process. Their pose is calculated from the local depth maps associated with each view, without requiring a consistent global representation of scene geometry. Extensive experiments with ground-truth data and hand-held sequences confirm the performance and accuracy of our approach.
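The view-dependent weighting described above can be illustrated with a minimal sketch. The function below is a plausible (hypothetical) weighting scheme, not the paper's exact formulation: each source camera is weighted by how closely its viewing ray through a scene point agrees with the virtual camera's ray, using a Gaussian falloff on the angle between them. The names `view_weights` and the parameter `sigma` are illustrative assumptions.

```python
import math


def view_weights(virtual_pos, source_positions, point, sigma=0.2):
    """Blend weights for source cameras, favoring those whose viewing
    ray through `point` is angularly close to the virtual camera's ray.
    Hypothetical scheme: Gaussian falloff on the ray angle (radians)."""

    def unit_ray(cam):
        # Direction from camera position to the scene point, normalized.
        d = [p - c for p, c in zip(point, cam)]
        norm = math.sqrt(sum(x * x for x in d))
        return [x / norm for x in d]

    rv = unit_ray(virtual_pos)
    weights = []
    for cam in source_positions:
        rs = unit_ray(cam)
        cos_a = sum(a * b for a, b in zip(rv, rs))
        angle = math.acos(max(-1.0, min(1.0, cos_a)))
        weights.append(math.exp(-((angle / sigma) ** 2)))

    total = sum(weights)
    return [w / total for w in weights]
```

A camera nearly coincident with the virtual viewpoint receives the dominant weight, so the rendered view degrades gracefully as the virtual camera moves between captured positions.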

Citation (APA)

Heigl, B., Koch, R., Pollefeys, M., Denzler, J., & Van Gool, L. (1999). Plenoptic Modeling and Rendering from Image Sequences Taken by a Hand-Held Camera (pp. 94–101). https://doi.org/10.1007/978-3-642-60243-6_11
