A fusion framework of stereo vision and Kinect for high-quality dense depth maps


Abstract

We present a fusion framework of stereo vision and Kinect for high-quality dense depth maps. The fusion problem is formulated as maximum a posteriori estimation of a Markov random field using Bayes' rule. We design a global energy function with a novel data term, which provides a reasonable, straightforward, and scalable way to fuse stereo vision with the depth data from Kinect. In particular, the visibility and pixel-wise noise of the Kinect depth data are taken into account in our fusion approach. Experimental results demonstrate the effectiveness and accuracy of the proposed framework. © 2013 Springer-Verlag.
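The MAP-over-MRF formulation mentioned above typically reduces, via the negative log of Bayes' rule, to minimizing a global energy over the depth map. The sketch below shows only that standard generic form; the symbols (I_s for the stereo pair, Z_k for the Kinect depth measurements, λ for the smoothness weight, N for the pixel neighborhood) are illustrative notation, and the paper's specific data term, visibility handling, and noise model are not reproduced here.

\[
  D^{*} \;=\; \arg\max_{D}\, p(D \mid I_{s}, Z_{k})
       \;=\; \arg\min_{D}\, E(D),
  \qquad
  E(D) \;=\; \sum_{p} E_{\mathrm{data}}\!\left(d_{p};\, I_{s}, Z_{k}\right)
        \;+\; \lambda \sum_{(p,q)\in\mathcal{N}} E_{\mathrm{smooth}}\!\left(d_{p}, d_{q}\right).
\]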

Citation (APA)

Wang, Y., & Jia, Y. (2013). A fusion framework of stereo vision and Kinect for high-quality dense depth maps. In Lecture Notes in Computer Science (Vol. 7729 LNCS, pp. 109–120). https://doi.org/10.1007/978-3-642-37484-5_10
