An Adaptive Multiscale Fusion Network Based on Regional Attention for Remote Sensing Images

4 citations · 8 Mendeley readers

This article is free to access.

Abstract

With the widespread application of semantic segmentation to high-resolution remote sensing images, improving segmentation accuracy has become a research goal in the remote sensing field. An innovative Fully Convolutional Network (FCN) based on regional attention is proposed to improve the performance of semantic segmentation frameworks for remote sensing images. The proposed network follows the encoder-decoder architecture of semantic segmentation and includes three strategies to improve segmentation accuracy. First, an enhanced GCN module is applied to capture the semantic features of remote sensing images. Second, the MGFM is proposed to capture different contexts by sampling at different densities. Third, the RAM is introduced to assign large weights to high-value information in different regions of the feature map. Our method is assessed on two datasets: the ISPRS Potsdam dataset and the CCF dataset. The results indicate that the model with these strategies outperforms the baseline (DCED50) in F1, mean IoU, and PA by 10.81%, 19.11%, and 11.36% on the Potsdam dataset and by 29.26%, 27.64%, and 13.57% on the CCF dataset.
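The regional weighting idea described above can be illustrated with a toy sketch: partition a 2-D feature map into square regions, score each region (here by its mean activation), softmax the scores into attention weights, and rescale each region by its weight so that high-value regions are emphasized. This is an illustrative simplification, not the paper's actual RAM implementation; all function and variable names are hypothetical.

```python
import math

def regional_attention(feature_map, region_size):
    """Toy regional attention: reweight non-overlapping square regions of a
    2-D feature map by a softmax over their mean activations.
    Illustrative only; not the paper's actual module."""
    h, w = len(feature_map), len(feature_map[0])
    # Partition the map into non-overlapping region_size x region_size tiles.
    regions = []
    for r0 in range(0, h, region_size):
        for c0 in range(0, w, region_size):
            tile = [(r, c)
                    for r in range(r0, min(r0 + region_size, h))
                    for c in range(c0, min(c0 + region_size, w))]
            regions.append(tile)
    # Score each region by its mean activation.
    scores = [sum(feature_map[r][c] for r, c in tile) / len(tile)
              for tile in regions]
    # Softmax the scores into attention weights (numerically stable form).
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Reweight each region; the len(regions) factor keeps the average
    # scale of the map roughly unchanged (weights average to 1/len(regions)).
    out = [row[:] for row in feature_map]
    for tile, wgt in zip(regions, weights):
        for r, c in tile:
            out[r][c] *= wgt * len(regions)
    return out, weights
```

In the actual network, such weights would be learned and applied to multi-channel convolutional features rather than computed from raw means, but the core mechanism of amplifying informative regions is the same.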

Cite

CITATION STYLE

APA

Lu, W., Liang, L., Wu, X., Wang, X., & Cai, J. (2020). An Adaptive Multiscale Fusion Network Based on Regional Attention for Remote Sensing Images. IEEE Access, 8, 107802–107813. https://doi.org/10.1109/ACCESS.2020.3000425
