Unsupervised learning of depth estimation, camera motion prediction and dynamic object localization from video


Abstract

Estimating scene depth, predicting camera motion and localizing dynamic objects from monocular videos are fundamental but challenging research topics in computer vision. Deep learning has recently demonstrated impressive performance on these tasks. This article presents a novel unsupervised deep learning framework for scene depth estimation, camera motion prediction and dynamic object localization from videos. Consecutive stereo image pairs are used to train the system, while only monocular images are needed for inference. The supervisory signals for the training stage come from various forms of image synthesis. Because consecutive stereo video is used, both spatial and temporal photometric errors can serve to synthesize the images. Furthermore, to mitigate the impact of occlusions, adaptive left-right consistency and forward-backward consistency losses are added to the objective function. Experimental results on the KITTI and Cityscapes datasets demonstrate that our method is more effective in depth estimation, camera motion prediction and dynamic object localization than previous models.
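The spatial photometric error and left-right consistency terms mentioned above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function names, nearest-neighbour warping, and plain L1 losses are illustrative assumptions (the actual method uses differentiable bilinear sampling and a richer photometric loss).

```python
import numpy as np

def warp_horizontal(img, disparity):
    """Synthesize a left view from a right image (or warp a disparity map)
    by shifting each row by the per-pixel horizontal disparity.
    Nearest-neighbour sampling, for illustration only."""
    h, w = img.shape[:2]
    cols = np.arange(w)
    warped = np.empty_like(img)
    for y in range(h):
        src = np.clip(cols - disparity[y].round().astype(int), 0, w - 1)
        warped[y] = img[y, src]
    return warped

def photometric_l1(target, synthesized):
    """Spatial photometric error: mean absolute difference between the
    real target image and the image synthesized from the other view."""
    return np.abs(target - synthesized).mean()

def left_right_consistency(disp_left, disp_right):
    """Penalize disagreement between the left disparity map and the
    right disparity map projected into the left view."""
    disp_right_in_left = warp_horizontal(disp_right, disp_left)
    return np.abs(disp_left - disp_right_in_left).mean()
```

With a perfect disparity map, the synthesized left image matches the real one and both losses vanish; during training, minimizing these terms drives the predicted disparities toward that state.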

Citation (APA)

Yang, D., Zhong, X., Gu, D., Peng, X., Yang, G., & Zou, C. (2020). Unsupervised learning of depth estimation, camera motion prediction and dynamic object localization from video. International Journal of Advanced Robotic Systems, 17(2). https://doi.org/10.1177/1729881420909653
