Besides airborne laser bathymetry and multimedia photogrammetry, spectrally derived bathymetry provides a third optical method for deriving water depths. In this paper, we introduce BathyNet, a U-Net-like convolutional neural network based on high-resolution, multispectral RGBC (red, green, blue, coastal blue) aerial images. The approach combines photogrammetric and radiometric methods: preprocessing of the raw aerial images relies on strict ray tracing of the potentially oblique image rays, taking the intrinsic and extrinsic camera parameters into account. The actual depth estimation exploits the radiometric image content in a deep learning framework. 3D water surface and water bottom models derived from simultaneously captured laser bathymetry point clouds serve as reference and training data for both image preprocessing and the depth estimation itself. As such, the approach highlights the benefits of jointly processing data from hybrid active and passive imaging sensors. The RGBC images and laser data of four groundwater-supplied lakes around Augsburg, Germany, captured in April 2018, served as the basis for testing and validating the approach. With systematic depth biases of less than 15 cm and a standard deviation of around 40 cm, the results satisfy the vertical accuracy limit Bc7 defined by the International Hydrographic Organization. Further improvements are anticipated by extending BathyNet to include a simultaneous semantic segmentation branch.
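For illustration only, the following minimal sketch shows how a U-Net-like per-pixel depth regression network of this kind could be set up. It assumes a PyTorch environment; the layer widths, network depth, L1 loss, and all names (MiniBathyNet, conv_block) are illustrative assumptions and do not reproduce the published BathyNet architecture or its training procedure. The key points it mirrors are the 4-channel RGBC input, the encoder-decoder structure with skip connections, and supervised regression against laser-derived reference depths.

import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # Two 3x3 convolutions with ReLU, as in a standard U-Net stage.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class MiniBathyNet(nn.Module):
    # Hypothetical two-stage U-Net-like encoder-decoder for depth regression.
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(4, 32)    # 4 input channels: R, G, B, coastal blue
        self.enc2 = conv_block(32, 64)
        self.bottleneck = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)  # 64 skip channels + 64 upsampled
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)   # 32 skip channels + 32 upsampled
        self.head = nn.Conv2d(32, 1, 1)  # one depth value per pixel

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# Supervised training step: the reference depths stand in for the water
# bottom models derived from the laser bathymetry point clouds.
model = MiniBathyNet()
images = torch.rand(2, 4, 128, 128)     # RGBC image patches
ref_depth = torch.rand(2, 1, 128, 128)  # laser-derived reference depths
loss = nn.functional.l1_loss(model(images), ref_depth)
loss.backward()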
Citation:
Mandlburger, G., Kölle, M., Nübel, H., & Soergel, U. (2021). BathyNet: A Deep Neural Network for Water Depth Mapping from Multispectral Aerial Images. PFG - Journal of Photogrammetry, Remote Sensing and Geoinformation Science, 89(2), 71–89. https://doi.org/10.1007/s41064-021-00142-3