Recently, remote sensing image super-resolution (RSISR) has drawn considerable attention and achieved great breakthroughs based on convolutional neural networks (CNNs). Because texture and structural information frequently recurs at multiple scales within the same remote sensing image (RSI) yet varies greatly across different RSIs, state-of-the-art CNN-based methods have begun to explore multiscale global features in RSIs using attention mechanisms. However, they remain insufficient at capturing significant content attention cues in RSIs. In this article, we present a new hybrid attention-based U-shaped network (HAUNet) for RSISR that effectively explores multiscale features and enhances global feature representation through hybrid convolution-based attention. It contains two kinds of convolutional attention-based single-scale feature extraction modules (SEMs), which explore global spatial context and abstract content information, and a cross-scale interaction module (CIM) that serves as the skip connection between encoder outputs at different scales, bridging the semantic and resolution gaps between them. Considering deployment on devices with limited hardware, we further design a lighter HAUNet-S with about 596K parameters. Results from the local attribution map (LAM) attribution analysis demonstrate that HAUNet captures meaningful content information more efficiently, and quantitative results show that HAUNet significantly improves RSISR performance on four remote sensing test datasets. Meanwhile, HAUNet-S also maintains competitive performance. Our code is available at https://github.com/likakakaka/HAUNet_RSISR.
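To make the U-shaped data flow concrete, the sketch below illustrates the general pattern the abstract describes: features extracted at two scales by per-scale modules, then fused by a cross-scale skip connection before decoding. This is a minimal NumPy toy, not the paper's implementation; the function names `sem`, `cim`, and `haunet_sketch`, the sigmoid global-context gate, and the two-level depth are all illustrative assumptions standing in for the actual convolutional attention blocks.

```python
import numpy as np

def sem(x):
    """Toy stand-in for a single-scale feature extraction module (SEM):
    modulate the feature map with a global-context channel gate
    (hypothetical simplification of the paper's convolutional attention)."""
    gate = 1.0 / (1.0 + np.exp(-x.mean(axis=(1, 2), keepdims=True)))
    return x * gate

def downsample(x):
    """2x average pooling over the spatial axes (C, H, W layout)."""
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def upsample(x):
    """2x nearest-neighbour upsampling over the spatial axes."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def cim(fine, coarse):
    """Toy stand-in for the cross-scale interaction module (CIM):
    fuse an encoder feature with its upsampled coarser counterpart."""
    return fine + upsample(coarse)

def haunet_sketch(x):
    """Two-level U-shaped pass: encode at two scales, fuse them through
    the CIM-style skip connection, decode at the input resolution."""
    e1 = sem(x)                # full-resolution encoder features
    e2 = sem(downsample(e1))   # half-resolution encoder features
    d1 = sem(cim(e1, e2))      # skip connection bridges the two scales
    return d1

feat = np.random.rand(4, 8, 8).astype(np.float32)  # (channels, H, W)
out = haunet_sketch(feat)
print(out.shape)  # spatial size is preserved end to end
```

The point of the sketch is only the wiring: each scale gets its own attention-style module, and the skip connection does not merely copy encoder features but actively mixes information across scales before decoding.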
Wang, J., Wang, B., Wang, X., Zhao, Y., & Long, T. (2023). Hybrid Attention-Based U-Shaped Network for Remote Sensing Image Super-Resolution. IEEE Transactions on Geoscience and Remote Sensing, 61. https://doi.org/10.1109/TGRS.2023.3283769