In this contribution we focus on plenoptic scene modeling and rendering from long image sequences taken with a hand-held camera. The image sequence is calibrated with a structure-from-motion approach that accounts for the special viewing geometry of plenoptic scenes. By applying a stereo matching technique, dense depth maps are recovered locally for each viewpoint. View-dependent rendering is accomplished by mapping all images onto a common plane of mean geometry and weighting them according to the current position of a virtual camera. To improve accuracy, approximating planes are defined locally in a hierarchical refinement process. Their pose is calculated from the local depth maps associated with each view, without requiring a consistent global representation of scene geometry. Extensive experiments with ground-truth data and hand-held sequences confirm the performance and accuracy of our approach.
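The view-dependent blending described above can be sketched as follows. This is a hypothetical illustration, not the paper's actual weighting scheme: it assumes each source image's weight falls off with the angle between its viewing ray and the virtual camera's ray toward a common point on the mean plane, with weights normalized to sum to one.

```python
import math

def blending_weights(virtual_cam, source_cams, plane_point, eps=1e-6):
    """Weight source images by angular proximity to the virtual view.

    Hypothetical weighting for illustration: each source camera receives
    a weight inversely proportional to the angle between its viewing ray
    and the virtual camera's ray toward `plane_point` on the mean plane.
    The paper's exact weighting function may differ.
    """
    def ray(cam):
        # Unit direction from the camera center toward the plane point.
        d = [p - c for p, c in zip(plane_point, cam)]
        n = math.sqrt(sum(x * x for x in d))
        return [x / n for x in d]

    v = ray(virtual_cam)
    raw = []
    for cam in source_cams:
        s = ray(cam)
        # Clamp to guard against floating-point drift outside [-1, 1].
        cos_ang = max(-1.0, min(1.0, sum(a * b for a, b in zip(v, s))))
        angle = math.acos(cos_ang)
        raw.append(1.0 / (angle + eps))  # closer viewing directions weigh more
    total = sum(raw)
    return [w / total for w in raw]
```

For example, with a virtual camera at the origin looking at a plane point ahead of it, a source camera displaced slightly sideways receives a larger weight than one displaced farther, so nearby captured views dominate the blend.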
CITATION STYLE
Heigl, B., Koch, R., Pollefeys, M., Denzler, J., & Van Gool, L. (1999). Plenoptic Modeling and Rendering from Image Sequences Taken by a Hand-Held Camera (pp. 94–101). https://doi.org/10.1007/978-3-642-60243-6_11