Stereo matching confidence learning based on multi-modal convolution neural networks

Abstract

In stereo matching, the correctness of stereo pair matches, also called confidence, is used to improve dense disparity estimation. In this paper, we propose a multi-modal deep learning approach to stereo matching confidence estimation. To predict the confidence, we design a Convolutional Neural Network (CNN) trained on image patches from multi-modal data, namely the source image pairs and initial disparity maps. To the best of our knowledge, this is the first approach reported in the literature that combines multiple modalities with patch-based deep learning to predict confidence. Furthermore, we explore and compare the confidence prediction ability of the different data modalities. Finally, we evaluate our network architecture on the KITTI data sets. The experiments demonstrate that our multi-modal confidence network achieves competitive results compared with state-of-the-art methods.

Citation

Fu, Z., Ardabilian, M., & Stern, G. (2019). Stereo matching confidence learning based on multi-modal convolution neural networks. In Communications in Computer and Information Science (Vol. 842, pp. 69–81). Springer Verlag. https://doi.org/10.1007/978-3-030-19816-9_6
