Self-supervised learning shows great potential in monocular depth estimation, using image sequences as the only source of supervision. Although high-resolution images have been used for depth estimation, the prediction accuracy has not improved significantly. In this work, we find that the core reason is inaccurate depth estimation in large-gradient regions: as the resolution increases, the bilinear interpolation error gradually disappears, and these regions come to dominate the remaining error. To obtain more accurate depth estimation in large-gradient regions, it is necessary to obtain high-resolution features with both spatial and semantic information. We therefore present an improved DepthNet, HR-Depth, with two effective strategies: (1) redesigning the skip connections in DepthNet to obtain better high-resolution features, and (2) a feature-fusion Squeeze-and-Excitation (fSE) module that fuses features more efficiently. Using ResNet-18 as the encoder, HR-Depth surpasses all previous state-of-the-art (SoTA) methods with the fewest parameters at both high and low resolution. Moreover, previous SoTA methods rely on fairly complex and deep networks with many parameters, which limits their practical applications. We therefore also construct a lightweight network that uses MobileNetV3 as the encoder. Experiments show that this lightweight network performs on par with many large models such as Monodepth2 at high resolution with only 20% of the parameters. All code and models will be available at https://github.com/shawLyu/HR-Depth.
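As a rough illustration of the second strategy, the sketch below shows one plausible way a feature-fusion Squeeze-and-Excitation block could work: an upsampled decoder feature is concatenated with high-resolution skip features, the concatenated channels are reweighted with standard SE-style attention, and a 1x1 convolution fuses the result. This is a minimal sketch under the assumption that fSE follows the standard Squeeze-and-Excitation pattern; the class name, layer configuration, and channel counts are illustrative, and the authors' actual implementation is in the linked repository.

```python
import torch
import torch.nn as nn


class fSEModuleSketch(nn.Module):
    """Illustrative feature-fusion Squeeze-and-Excitation block (not the official HR-Depth code).

    Concatenates an upsampled decoder feature with one or more high-resolution
    skip features, reweights the concatenated channels with SE-style attention,
    then fuses them with a 1x1 convolution.
    """

    def __init__(self, in_channels: int, out_channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # squeeze: global spatial average per channel
        self.excite = nn.Sequential(                  # excitation: per-channel gating weights
            nn.Linear(in_channels, in_channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(in_channels // reduction, in_channels),
            nn.Sigmoid(),
        )
        self.fuse = nn.Sequential(                    # fuse the reweighted channels
            nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False),
            nn.ReLU(inplace=True),
        )

    def forward(self, decoder_feat: torch.Tensor, skip_feats: list) -> torch.Tensor:
        # Upsample the low-resolution decoder feature to the skip-feature resolution.
        decoder_feat = nn.functional.interpolate(decoder_feat, scale_factor=2, mode="nearest")
        x = torch.cat([decoder_feat] + skip_feats, dim=1)
        b, c, _, _ = x.shape
        w = self.excite(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return self.fuse(x * w)


if __name__ == "__main__":
    # Hypothetical usage: fuse an upsampled decoder feature with two skip features.
    block = fSEModuleSketch(in_channels=64 + 64 + 64, out_channels=64)
    dec = torch.randn(1, 64, 24, 80)                  # low-resolution decoder feature
    skips = [torch.randn(1, 64, 48, 160) for _ in range(2)]
    print(block(dec, skips).shape)                    # torch.Size([1, 64, 48, 160])
```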
Lyu, X., Liu, L., Wang, M., Kong, X., Liu, L., Liu, Y., … Yuan, Y. (2021). HR-Depth: High Resolution Self-Supervised Monocular Depth Estimation. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 3B, pp. 2294–2301). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i3.16329