Segmentation of Remote Sensing Images Based on U-Net Multi-Task Learning


Abstract

To accurately segment building features in high-resolution remote sensing images, a semantic segmentation method based on multi-task learning with the U-net network is proposed. First, a boundary distance map is generated from the ground-truth building map of each remote sensing image. The remote sensing image and its ground-truth map are used as input to the U-net network, and a building-footprint prediction layer is added at the end of the network. A multi-task network with a boundary distance prediction layer is then built on a ResNet backbone. Experiments on the ISPRS aerial remote sensing building and feature annotation data set show that, compared with the fully convolutional network combined with a multi-layer perceptron, the intersection-over-union of the VGG16 network, VGG16 + boundary prediction, ResNet50, and the proposed method increased by 5.15%, 6.946%, 6.41%, and 7.86%, respectively. The corresponding accuracies rose to 94.71%, 95.39%, 95.30%, and 96.10%, enabling high-precision extraction of building features.
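The boundary distance map mentioned in the abstract can be illustrated with a minimal sketch: given a binary building mask, mark the boundary pixels and propagate distances outward with a multi-source breadth-first search. This is an assumption about the preprocessing step (the paper's exact distance definition and implementation are not given here); `boundary_distance_map` is a hypothetical helper name.

```python
from collections import deque

def boundary_distance_map(mask):
    """For each pixel, the 4-connected distance (in pixels) to the
    nearest building boundary pixel. `mask` is a 2D list of 0/1 labels
    (1 = building). Sketch only; the paper may use a different metric."""
    h, w = len(mask), len(mask[0])

    def is_boundary(r, c):
        # A building pixel is a boundary pixel if any 4-neighbour has a
        # different label (outside the image counts as background).
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            nv = mask[nr][nc] if 0 <= nr < h and 0 <= nc < w else 0
            if nv != mask[r][c]:
                return True
        return False

    INF = float("inf")
    dist = [[INF] * w for _ in range(h)]
    q = deque()
    # Seed the BFS with every boundary pixel at distance 0.
    for r in range(h):
        for c in range(w):
            if mask[r][c] == 1 and is_boundary(r, c):
                dist[r][c] = 0
                q.append((r, c))
    # Multi-source BFS: each step moves one pixel in a 4-connected grid.
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and dist[nr][nc] > dist[r][c] + 1:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist

# Usage: a 4x4 tile with a 2x2 building in the centre.
mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
dist = boundary_distance_map(mask)
```

In this example every building pixel touches background, so all four are boundary pixels (distance 0); background pixels next to the building get distance 1 and the corners distance 2. In the multi-task setting, a regression head would be trained against such a map alongside the segmentation head.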

Citation (APA)

Ruiwen, N., Ye, M., Ji, L., Tong, Z., Tianye, L., Ruilong, F., … Tyasi, T. L. (2022). Segmentation of Remote Sensing Images Based on U-Net Multi-Task Learning. Computers, Materials and Continua, 73(2), 3263–3274. https://doi.org/10.32604/cmc.2022.026881
