MonoComb: A Sparse-To-Dense Combination Approach for Monocular Scene Flow

Abstract

Contrary to the ongoing trend in automotive applications towards the use of more and more diverse sensors, this work tackles the complex scene flow problem with a monocular camera setup, i.e. using a single sensor. To this end, we exploit the latest achievements in single image depth estimation, optical flow, and sparse-to-dense interpolation and propose a monocular combination approach (MonoComb) to compute dense scene flow. MonoComb uses optical flow to relate reconstructed 3D positions over time and interpolates occluded areas. This way, existing monocular methods are outperformed in dynamic foreground regions, which leads to the second-best result among the competitors on the challenging KITTI 2015 scene flow benchmark.
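The abstract describes a pipeline that combines monocular depth, optical flow, and sparse-to-dense interpolation. The following Python sketch illustrates that general idea, not the authors' implementation: depth maps at two time steps are back-projected to 3D, optical flow associates pixels over time, and occluded pixels are left unset for a subsequent interpolation step. All names (`depth_t0`, `flow`, `occlusion_mask`, the interpolation step itself) are illustrative assumptions.

```python
# Minimal sketch of a sparse-to-dense monocular scene flow combination,
# assuming pre-computed monocular depth, optical flow, and an occlusion mask.
import numpy as np

def backproject(depth, K):
    """Lift a depth map (H, W) to camera-space 3D points (H, W, 3)."""
    H, W = depth.shape
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return np.stack([x, y, depth], axis=-1)

def sparse_scene_flow(depth_t0, depth_t1, flow, K, occlusion_mask):
    """Scene flow at non-occluded pixels; occluded pixels remain NaN and
    would be filled afterwards by a sparse-to-dense interpolation."""
    H, W = depth_t0.shape
    pts_t0 = backproject(depth_t0, K)
    pts_t1 = backproject(depth_t1, K)

    # Target pixel of every source pixel according to the optical flow.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    u1 = np.clip(np.round(u + flow[..., 0]).astype(int), 0, W - 1)
    v1 = np.clip(np.round(v + flow[..., 1]).astype(int), 0, H - 1)

    scene_flow = np.full((H, W, 3), np.nan, dtype=np.float32)
    valid = ~occlusion_mask
    # 3D displacement between the flow-associated points at t1 and t0.
    scene_flow[valid] = pts_t1[v1[valid], u1[valid]] - pts_t0[valid]
    return scene_flow
```

The remaining NaN entries (occluded or otherwise invalid pixels) would then be densified, e.g. with an edge-aware sparse-to-dense interpolation, which is the role the abstract attributes to the interpolation component.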

Cite (APA)

Schuster, R., Stricker, D., & Unger, C. (2020). MonoComb: A Sparse-To-Dense Combination Approach for Monocular Scene Flow. In Proceedings - CSCS 2020: ACM Computer Science in Cars Symposium. Association for Computing Machinery, Inc. https://doi.org/10.1145/3385958.3430473
