Urban building extraction from high-resolution remote sensing imagery is important for urban planning, population statistics, and disaster assessment. However, the high density of urban buildings and the subtle differences between their boundaries pose a great challenge for accurate building extraction. Although existing building extraction methods have achieved good results on urban imagery, problems remain, such as loss of boundary information, poor performance in dense regions, and severe interference from building shadows. To accurately extract building regions from high-resolution remote sensing images, this study proposes a practical building extraction method based on convolutional neural networks (CNNs). First, multi-scale recurrent residual convolution is introduced into the generative network to extract multi-scale, multi-resolution features from remote sensing images. Second, attention gate (AG) skip connections are used to enhance the information interaction between features at different scales. Finally, an adversarial network with a parallel architecture is used to reduce the difference between the extracted results and the ground truths. Moreover, a conditional information constraint is introduced during training to improve the robustness and generalization ability of the proposed method. Qualitative and quantitative analyses are performed on the IAILD and Massachusetts datasets. The experimental results show that the proposed method can accurately and effectively extract building regions from remote sensing images.
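The abstract does not give implementation details, so the following is only a minimal sketch, assuming PyTorch, of two of the generator components it names: a recurrent residual convolution block (in the style of R2U-Net) and an attention gate (AG) skip connection (in the style of Attention U-Net). All module names, channel sizes, and the recurrence depth t are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch of a recurrent residual convolution block and an attention-gate
# skip connection, two ideas referenced in the abstract. Layer sizes and module names
# are assumptions for illustration only.
import torch
import torch.nn as nn


class RecurrentConv(nn.Module):
    """Applies the same 3x3 convolution t times, feeding its own output back in."""

    def __init__(self, channels: int, t: int = 2):
        super().__init__()
        self.t = t
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.conv(x)
        for _ in range(self.t):
            out = self.conv(x + out)  # recurrent refinement of the feature map
        return out


class RecurrentResidualBlock(nn.Module):
    """Recurrent residual convolution: 1x1 projection plus two recurrent units."""

    def __init__(self, in_channels: int, out_channels: int, t: int = 2):
        super().__init__()
        self.project = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        self.body = nn.Sequential(
            RecurrentConv(out_channels, t),
            RecurrentConv(out_channels, t),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.project(x)
        return x + self.body(x)  # residual connection


class AttentionGate(nn.Module):
    """Gates encoder skip features with a decoder (gating) signal before fusion."""

    def __init__(self, enc_channels: int, gate_channels: int, inter_channels: int):
        super().__init__()
        self.theta = nn.Conv2d(enc_channels, inter_channels, kernel_size=1)
        self.phi = nn.Conv2d(gate_channels, inter_channels, kernel_size=1)
        self.psi = nn.Sequential(
            nn.Conv2d(inter_channels, 1, kernel_size=1), nn.Sigmoid()
        )

    def forward(self, enc: torch.Tensor, gate: torch.Tensor) -> torch.Tensor:
        # enc and gate are assumed to share spatial size (upsample gate beforehand).
        attn = self.psi(torch.relu(self.theta(enc) + self.phi(gate)))
        return enc * attn  # suppress irrelevant encoder responses


if __name__ == "__main__":
    skip = torch.randn(1, 64, 128, 128)   # encoder feature map
    gate = torch.randn(1, 64, 128, 128)   # upsampled decoder feature map
    gated = AttentionGate(64, 64, 32)(skip, gate)
    fused = RecurrentResidualBlock(128, 64)(torch.cat([gated, gate], dim=1))
    print(fused.shape)  # torch.Size([1, 64, 128, 128])
```

The attention gate re-weights the encoder skip features with a mask computed from the decoder signal, which is one common way to realize the "information interaction between different scale features" described above; the multi-scale fusion and the conditional adversarial training are not sketched here.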
CITATION STYLE
Wang, Z., Xu, N., Wang, B., Liu, Y., & Zhang, S. (2022). Urban building extraction from high-resolution remote sensing imagery based on multi-scale recurrent conditional generative adversarial network. GIScience and Remote Sensing, 59(1), 861–884. https://doi.org/10.1080/15481603.2022.2076382