Estimating scene depth, predicting camera motion, and localizing dynamic objects from monocular videos are fundamental but challenging problems in computer vision. Deep learning has recently demonstrated remarkable performance on these tasks. This article presents a novel unsupervised deep learning framework for scene depth estimation, camera motion prediction, and dynamic object localization from video. Consecutive stereo image pairs are used to train the system, while only monocular images are needed for inference. The supervisory signals for training come from various forms of image synthesis: because consecutive stereo frames are available, both spatial and temporal photometric errors are used to synthesize images. Furthermore, to mitigate the impact of occlusions, adaptive left-right consistency and forward-backward consistency losses are added to the objective function. Experimental results on the KITTI and Cityscapes datasets demonstrate that our method outperforms previous models in depth estimation, camera motion prediction, and dynamic object localization.
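The abstract's training signals can be illustrated with a minimal sketch: an L1 photometric error between a target frame and a synthesized (warped) image, plus a left-right consistency term between the left disparity map and the right disparity warped into the left view. All function names and the use of plain NumPy are illustrative assumptions, not the authors' implementation, which combines these terms with forward-backward consistency and adaptive occlusion weighting.

```python
import numpy as np

def photometric_loss(target, synthesized):
    # L1 photometric error between the target frame and an image
    # synthesized by warping: spatially from the other stereo view,
    # or temporally from an adjacent frame.
    return np.mean(np.abs(target - synthesized))

def lr_consistency_loss(disp_left, disp_right_warped):
    # Left-right consistency: the left-view disparity should agree
    # with the right-view disparity warped into the left view.
    return np.mean(np.abs(disp_left - disp_right_warped))

# Toy example on random data (shapes and values are placeholders).
rng = np.random.default_rng(0)
target = rng.random((4, 4))
synth = rng.random((4, 4))
total_loss = photometric_loss(target, synth) + lr_consistency_loss(target, synth)
```

In the full framework these terms would be summed with weights and minimized jointly over the depth, pose, and motion networks; the sketch only shows the shape of each individual loss.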
Yang, D., Zhong, X., Gu, D., Peng, X., Yang, G., & Zou, C. (2020). Unsupervised learning of depth estimation, camera motion prediction and dynamic object localization from video. International Journal of Advanced Robotic Systems, 17(2). https://doi.org/10.1177/1729881420909653