We propose a novel approach to generating an arbitrary in-between stereoscopic view from a wide-baseline stereo camera using view morphing. Conventionally, a stereoscopic view of a real scene has been generated by a stereo camera that simulates the human eye configuration with approximately 65 mm horizontal separation. Such a configuration, however, provides a fixed viewpoint and a depth feeling that depends entirely on the camera pose. In this work, we use a stereo camera with a wider baseline than the conventional one to increase flexibility in both viewpoint and the degree of depth feeling. View morphing is a shape-preserving transition method from a source view to a destination view. We adapt this method to choose the locations of two virtual cameras that yield an in-between stereoscopic view. By choosing a different distance between the two virtual cameras, we can control the degree of depth feeling and provide a customized depth sensation. Experimental results show a series of synthesized in-between stereoscopic views generated from the Middlebury stereo data set. We also show interlaced stereo composition results using pairs of synthesized views with 65 mm and 130 mm baselines, generated from input views acquired by a 160 mm-baseline stereo camera. © 2008 Springer Berlin Heidelberg.
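The core idea above can be illustrated with a minimal sketch. Assuming rectified (parallel) input views, linearly interpolating corresponding image points is shape-preserving in the sense of Seitz and Dyer's view morphing, and two interpolation parameters place two virtual cameras along the input baseline; their separation in parameter space scales the input baseline to the desired virtual one. The function names below are hypothetical, not from the paper.

```python
import numpy as np

def morph_point(p0, p1, s):
    """Interpolate corresponding points between two rectified views.

    For parallel (rectified) views, linear interpolation of matched
    image points is shape-preserving; s in [0, 1] places the virtual
    camera along the baseline between the two input cameras.
    """
    p0 = np.asarray(p0, dtype=float)
    p1 = np.asarray(p1, dtype=float)
    return (1.0 - s) * p0 + s * p1

def virtual_stereo_params(input_baseline_mm, target_baseline_mm, center=0.5):
    """Choose interpolation parameters (sL, sR) for two virtual cameras
    whose separation yields the desired virtual baseline
    (hypothetical helper, illustrating the paper's baseline control)."""
    half = 0.5 * target_baseline_mm / input_baseline_mm
    return center - half, center + half

# With a 160 mm input baseline, a 65 mm virtual stereo pair centered
# midway between the input cameras:
sL, sR = virtual_stereo_params(160.0, 65.0)
# (sR - sL) * 160 mm recovers the 65 mm target baseline
```

In this sketch, the customized depth feeling reported in the paper corresponds to varying `target_baseline_mm` (e.g. 65 mm versus 130 mm) while keeping the same input stereo pair.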
CITATION STYLE
Rhee, S. M., Choi, J., & Neumann, U. (2008). Stereoscopic view synthesis by view morphing. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5359 LNCS, pp. 924–933). https://doi.org/10.1007/978-3-540-89646-3_92