A mixed time-of-flight and stereoscopic camera system

Abstract

Several methods that combine range and color data have been investigated and successfully used in various applications. Most of these systems suffer from noise in the range data and from the resolution mismatch between the range sensor and the color cameras. High-resolution depth maps can be obtained using stereo matching, but this often fails to construct accurate depth maps of weakly or repetitively textured scenes. Range sensors, on the other hand, provide coarse depth information regardless of the presence or absence of texture. We propose a novel ToF-stereo fusion method based on an efficient seed-growing algorithm that uses the ToF data projected onto the stereo image pair as an initial set of correspondences. These initial “seeds” are then propagated to nearby pixels using a matching score that combines an image-similarity criterion with rough depth priors computed from the low-resolution range data. The overall result is a dense and accurate depth map at the resolution of the color cameras. We show that the proposed algorithm outperforms purely image-based 2D stereo algorithms and produces results of higher resolution than off-the-shelf RGB-D sensors such as the Kinect.
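
To make the seed-growing idea concrete, the sketch below illustrates one possible reading of it: sparse ToF-derived disparities act as seeds in the left image and are propagated to neighbouring pixels via a priority queue, ranking candidates by an image-similarity score (windowed normalised cross-correlation) weighted by a Gaussian prior around the ToF disparity. This is a minimal illustration under stated assumptions, not the authors' implementation; all names and parameters (grow_seeds, ncc, prior_sigma, tau, search) are hypothetical.

```python
# Minimal sketch (assumed, not the chapter's code) of seed-growing ToF-stereo fusion.
import heapq
import numpy as np

def ncc(left, right, y, xl, xr, win=2):
    """Normalised cross-correlation of two (2*win+1)^2 patches in rectified images."""
    h, w = left.shape
    if (y - win < 0 or y + win >= h or xl - win < 0 or xl + win >= w
            or xr - win < 0 or xr + win >= w):
        return -1.0
    a = left[y - win:y + win + 1, xl - win:xl + win + 1].astype(np.float64)
    b = right[y - win:y + win + 1, xr - win:xr + win + 1].astype(np.float64)
    a -= a.mean(); b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-9
    return float((a * b).sum() / denom)

def grow_seeds(left, right, seeds, prior_sigma=2.0, tau=0.4, search=1):
    """Propagate sparse seed disparities (y, x, d) over the left image.

    seeds come from the ToF depth map projected onto the left camera.
    Returns a disparity map with np.nan where nothing was grown.
    """
    h, w = left.shape
    disp = np.full((h, w), np.nan)
    heap = []
    for y, x, d in seeds:
        score = ncc(left, right, y, x, x - d)
        heapq.heappush(heap, (-score, y, x, d, d))  # (neg. score, y, x, disparity, ToF prior)
    while heap:
        neg_score, y, x, d, d_tof = heapq.heappop(heap)
        if -neg_score < tau or not np.isnan(disp[y, x]):
            continue
        disp[y, x] = d
        # Expand to 4-connected neighbours, testing disparities near the current one.
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if not (0 <= ny < h and 0 <= nx < w) or not np.isnan(disp[ny, nx]):
                continue
            best = None
            for nd in range(d - search, d + search + 1):
                sim = ncc(left, right, ny, nx, nx - nd)
                prior = np.exp(-0.5 * ((nd - d_tof) / prior_sigma) ** 2)
                score = sim * prior  # image similarity weighted by ToF depth prior
                if best is None or score > best[0]:
                    best = (score, nd)
            if best is not None and best[0] >= tau:
                heapq.heappush(heap, (-best[0], ny, nx, best[1], d_tof))
    return disp
```

In use, grow_seeds(left_gray, right_gray, seeds) would be called with rectified grayscale images and seed triples obtained by projecting each ToF sample into the left view and converting its depth to a disparity; the prior term is what keeps growth consistent with the coarse range data in weakly textured regions.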

Citation (APA)
Hansard, M., Lee, S., Choi, O., & Horaud, R. (2013). A mixed time-of-flight and stereoscopic camera system. In SpringerBriefs in Computer Science (pp. 77–96). Springer. https://doi.org/10.1007/978-1-4471-4658-2_5
