UnDEMoN: Unsupervised Deep Network for Depth and Ego-Motion Estimation

37 citations · 84 Mendeley readers

Abstract

This paper presents a deep-network-based unsupervised visual odometry system for 6-DoF camera pose estimation and dense depth map prediction from a monocular view. The proposed network is trained using unlabeled binocular stereo image pairs and is shown to provide superior performance in depth and ego-motion estimation compared to the existing state of the art. This is achieved by introducing a novel objective function and training the network using temporally aligned sequences of monocular images. The objective function is based on the Charbonnier penalty applied to spatial and bi-directional temporal reconstruction losses. The overall novelty of the approach lies in the fact that the proposed deep framework combines a disparity-based depth estimation network with a pose estimation network to obtain absolute scale-aware 6-DoF camera pose and a superior depth map. To the best of our knowledge, such a framework with completely unsupervised end-to-end learning has not been attempted before, making it a novel contribution to the field. The effectiveness of the approach is demonstrated through performance comparison with state-of-the-art methods on the KITTI driving dataset.
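The Charbonnier penalty mentioned in the abstract is a differentiable, robust alternative to the L1 loss, commonly written as ρ(x) = (x² + ε²)^α. The sketch below illustrates how such a penalty could be applied to a photometric reconstruction residual; the parameter values and function names are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def charbonnier(x, alpha=0.45, eps=1e-3):
    """Charbonnier penalty: rho(x) = (x^2 + eps^2)^alpha.
    alpha and eps are illustrative defaults, not necessarily
    the values used in the UnDEMoN paper."""
    return (x ** 2 + eps ** 2) ** alpha

def reconstruction_loss(target, reconstructed):
    """Mean Charbonnier penalty over per-pixel photometric residuals,
    e.g. between an image and its spatial or temporal reconstruction."""
    return np.mean(charbonnier(target - reconstructed))

# Toy example: a target image and a slightly perturbed reconstruction
rng = np.random.default_rng(0)
target = rng.random((4, 4))
recon = target + 0.01 * rng.standard_normal((4, 4))
loss = reconstruction_loss(target, recon)
```

In the paper's setting, such a penalty is applied both to spatial (left-right stereo) and bi-directional temporal reconstruction residuals, and the terms are summed into the overall training objective.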

Citation (APA)

Madhu Babu, V., Das, K., Majumdar, A., & Kumar, S. (2018). UnDEMoN: Unsupervised Deep Network for Depth and Ego-Motion Estimation. In IEEE International Conference on Intelligent Robots and Systems (pp. 1082–1088). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/IROS.2018.8593864
