This paper describes an approach to capturing the appearance and structure of immersive environments based on video imagery obtained with an omnidirectional camera system. The scheme proceeds by recovering the 3D positions of a set of point and line features in the world from image correspondences in a small set of key frames in the image sequence. Once the locations of these features have been recovered, the position of the camera during every frame in the sequence can be determined by using the recovered features as fiducials and estimating camera pose from the locations of corresponding image features in each frame. The end result of the procedure is an omnidirectional video sequence in which every frame is augmented with its pose with respect to an absolute reference frame, together with a 3D model of the environment composed of the point and line features in the scene. By augmenting the video clip with pose information, we give the viewer the ability to navigate the image sequence in new and interesting ways. More specifically, the user can exploit the pose information to travel through the video sequence along a trajectory different from the one taken by the original camera operator. This freedom presents the end user with an opportunity to immerse themselves in a remote environment.
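The abstract's pose-from-fiducials step can be illustrated with a simplified sketch. The paper itself works with omnidirectional imagery and both point and line features; the fragment below instead assumes an ordinary perspective camera and point features only, and uses the classical Direct Linear Transform (DLT) to estimate a 3x4 projection matrix from known 3D fiducials and their 2D image measurements. The function names are hypothetical and are not from the paper.

```python
import numpy as np

def estimate_pose_dlt(points_3d, points_2d):
    """Estimate a 3x4 camera projection matrix (up to scale) from
    >= 6 non-degenerate 3D-2D point correspondences via the DLT.
    This is a simplified stand-in for the paper's pose estimation,
    which handles omnidirectional images and line features as well."""
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        Xh = np.array([X, Y, Z, 1.0])
        # Each correspondence contributes two linear constraints on P.
        A.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
        A.append(np.concatenate([np.zeros(4), Xh, -v * Xh]))
    # The solution is the right singular vector with smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)

def project(P, points_3d):
    """Project 3D points through projection matrix P (pinhole model)."""
    Xh = np.hstack([points_3d, np.ones((len(points_3d), 1))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]
```

In the paper's setting, this per-frame estimation is repeated for every frame of the sequence once the fiducial positions have been fixed from the key frames; with noisy measurements one would refine the DLT estimate with a nonlinear least-squares step rather than use it directly.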
CITATION STYLE
Taylor, C. J. (2001). Videoplus: A method for capturing the structure and appearance of immersive environments. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2018, pp. 170–186). Springer Verlag. https://doi.org/10.1007/3-540-45296-6_13