CGGLNet: Semantic Segmentation Network for Remote Sensing Images Based on Category-Guided Global-Local Feature Interaction

Abstract

As spatial resolution increases, the information conveyed by remote sensing images becomes increasingly complex. Large-scale variation and the highly discrete distribution of objects greatly increase the difficulty of semantic segmentation for remote sensing images. Mainstream approaches usually rely on implicit attention mechanisms or transformer modules to capture global context. However, these approaches fail to explicitly extract intraobject consistency and interobject saliency features, leading to unclear boundaries and incomplete structures. In this article, we propose a category-guided global-local feature interaction network (CGGLNet), which utilizes category information to guide the modeling of global contextual information. To better acquire global information, we propose a category-guided supervised transformer module (CGSTM). This module guides the modeling of global context by estimating the potential class of each pixel, so that features of the same class become more aggregated and features of different classes become easier to distinguish. To enhance the representation of local detailed features of multiscale objects, we design an adaptive local feature extraction module (ALFEM). By connecting the CGSTM and the ALFEM in parallel, the network extracts the rich global and local context contained in the image. Meanwhile, the designed feature refinement segmentation head (FRSH) reduces the semantic gap between deep and shallow features and fully integrates information from different levels. Extensive ablation and comparison experiments on two public remote sensing datasets (the ISPRS Vaihingen and Potsdam datasets) indicate that the proposed CGGLNet achieves superior performance compared with state-of-the-art methods.
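The category-guided idea described above — estimating each pixel's potential class and using those estimates to steer global aggregation — can be sketched in a few lines. This is a minimal NumPy illustration of the general technique (soft class assignments pool pixel features into per-class centers, and each pixel then attends to those centers), not the paper's actual CGSTM implementation; all names and shapes here are assumptions for illustration.

```python
import numpy as np

def category_guided_context(feats, class_logits):
    """Hypothetical sketch of category-guided global context.

    feats:        (N, C) pixel features (H*W flattened to N)
    class_logits: (N, K) coarse per-pixel class scores
    returns:      (N, C) context-enhanced features
    """
    # Softmax over classes: soft assignment of each pixel to K classes.
    p = np.exp(class_logits - class_logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)                            # (N, K)

    # Class centers: probability-weighted mean of pixel features,
    # so each center summarizes one (estimated) category globally.
    centers = (p.T @ feats) / (p.sum(axis=0)[:, None] + 1e-6)    # (K, C)

    # Each pixel attends to the class centers; pixels of the same
    # class pull toward the same center, sharpening class saliency.
    attn = feats @ centers.T / np.sqrt(feats.shape[1])           # (N, K)
    attn = np.exp(attn - attn.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    context = attn @ centers                                     # (N, C)

    # Residual fusion of global context with the original features.
    return feats + context

rng = np.random.default_rng(0)
out = category_guided_context(rng.normal(size=(16, 8)),
                              rng.normal(size=(16, 4)))
```

In the full network this global branch would run in parallel with a local branch (the ALFEM) and be fused by the segmentation head; the sketch only covers the category-guided aggregation step.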

Citation (APA)

Ni, Y., Liu, J., Chi, W., Wang, X., & Li, D. (2024). CGGLNet: Semantic Segmentation Network for Remote Sensing Images Based on Category-Guided Global-Local Feature Interaction. IEEE Transactions on Geoscience and Remote Sensing, 62, 1–17. https://doi.org/10.1109/TGRS.2024.3379398
