Lightfield rendering allows fast visualization of complex scenes by view interpolation from images of densely spaced camera viewpoints. The lightfield data structure requires calibrated viewpoints, and rendering quality can be improved substantially when local scene depth is known for each viewpoint. In this contribution we propose to combine lightfield rendering with a geometry-based structure-from-motion approach that computes camera calibration and local depth estimates. The advantage of the combined approach with respect to a pure geometric structure recovery is that the estimated geometry need not be globally consistent but is updated locally depending on the rendering viewpoint. We concentrate on the viewpoint calibration that is computed directly from the image data by tracking image feature points. Ground-truth experiments on real lightfield sequences confirm the quality of calibration.
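The abstract's core idea, that calibrated viewpoints plus tracked feature correspondences yield local depth estimates, rests on multi-view triangulation. The following minimal sketch (not the authors' implementation; the camera setup and point are hypothetical) shows how a 3D point, and hence its depth in each view, is recovered by linear (DLT) triangulation from a feature correspondence between two calibrated viewpoints:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one scene point from two views.

    P1, P2: 3x4 camera projection matrices (calibrated viewpoints).
    x1, x2: the tracked feature's pixel coordinates (u, v) in each view.
    Returns the 3D point in the first camera's coordinate frame.
    """
    # Each image measurement contributes two linear constraints on X.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Homogeneous least-squares solution: right singular vector of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Hypothetical two-camera setup: unit intrinsics, second camera
# displaced one unit along the x axis (a densely spaced viewpoint pair).
K = np.eye(3)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Synthesize a noise-free feature correspondence from a known 3D point.
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]

X_est = triangulate(P1, P2, x1, x2)
print(X_est)  # recovers the point; its z-component is the local depth
```

With noise-free correspondences the point is recovered exactly; with real tracked features the SVD gives the least-squares solution, and the per-viewpoint depth (the point's z in that camera's frame) is what the rendering stage uses for depth-corrected interpolation.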
Koch, R., Heigl, B., Pollefeys, M., Van Gool, L., & Niemann, H. (1999). A geometric approach to lightfield calibration. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1689, pp. 596–603). Springer Verlag. https://doi.org/10.1007/3-540-48375-6_71