Scene Classification of Remote Sensing Images Based on Saliency Dual Attention Residual Network

Abstract

Scene classification of high-resolution Remote Sensing Images (RSI) is one of the basic challenges in RSI interpretation. Existing scene classification methods based on deep learning have achieved impressive performance. However, since RSI commonly contain various types of ground objects and complex backgrounds, most methods cannot focus on the salient features of a scene, which limits classification performance. To address this issue, we propose a novel Saliency Dual Attention Residual Network (SDAResNet) that extracts both cross-channel and spatial saliency information for scene classification of RSI. More specifically, the proposed SDAResNet consists of spatial attention and channel attention, in which spatial attention is embedded in low-level features to emphasize salient location information and suppress background information, and channel attention is integrated into high-level features to extract salient semantic information. Additionally, several image classification tricks are used to further improve classification accuracy. Finally, extensive experiments on two challenging benchmark RSI datasets demonstrate that our method significantly outperforms most state-of-the-art approaches.
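The dual-attention design described in the abstract can be sketched as follows. This is an illustrative NumPy sketch only: the sigmoid gating, channel-pooling choices, and the fixed bottleneck weights standing in for a learned MLP are assumptions for demonstration, not the paper's exact modules.

```python
import numpy as np

def _sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def spatial_attention(x):
    # x: (C, H, W) low-level feature map. Pool across channels, then gate
    # each spatial location -- emphasizes salient locations and suppresses
    # background responses.
    avg = x.mean(axis=0)                    # (H, W) average over channels
    mx = x.max(axis=0)                      # (H, W) max over channels
    gate = _sigmoid(avg + mx)               # (H, W) spatial gate in (0, 1)
    return x * gate[None, :, :]

def channel_attention(x, r=4):
    # x: (C, H, W) high-level feature map. Squeeze spatially, then reweight
    # channels -- emphasizes semantically meaningful channels.
    c = x.shape[0]
    z = x.mean(axis=(1, 2))                 # (C,) global average pooling
    # Hypothetical fixed weights stand in for the learned bottleneck MLP.
    w1 = np.ones((c // r, c)) / c           # squeeze: C -> C/r
    w2 = np.ones((c, c // r)) / (c // r)    # excite: C/r -> C
    s = _sigmoid(w2 @ np.maximum(w1 @ z, 0.0))  # (C,) channel gate
    return x * s[:, None, None]

# Apply spatial attention to low-level features, channel attention to
# high-level features, as the architecture description suggests.
x = np.random.rand(8, 4, 4)                 # toy (C=8, H=4, W=4) features
low = spatial_attention(x)
high = channel_attention(low)
```

In a residual network, each attended output would typically be added back to its input (a residual connection) before passing to the next stage.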

Citation (APA)

Guo, D., Xia, Y., & Luo, X. (2020). Scene Classification of Remote Sensing Images Based on Saliency Dual Attention Residual Network. IEEE Access, 8, 6344–6357. https://doi.org/10.1109/ACCESS.2019.2963769
