SAR-HUB: Pre-Training, Fine-Tuning, and Explaining

Abstract

Current remote sensing pre-trained models are trained on optical images and lose effectiveness when applied to SAR image tasks, so it is crucial to build sensor-specific SAR models with generalized feature representations and to demonstrate, with evidence, the limitations of optical pre-trained models on downstream SAR tasks. This study focuses on three aspects: pre-training, fine-tuning, and explaining. First, we collect the current large-scale open-source SAR scene image classification datasets to pre-train a series of deep neural networks, including convolutional neural networks (CNNs) and vision transformers (ViTs). A novel dynamic range adaptive enhancement method and a mini-batch class-balanced loss are proposed to tackle the challenges of SAR scene image classification. Second, the pre-trained models are transferred to various SAR downstream tasks and compared with their optical counterparts. Lastly, we propose a novel knowledge point interpretation method that reveals the benefits of SAR pre-trained models with comprehensive and quantifiable explanations. The study is reproducible via its open-source code and datasets, generalizes across extensive experiments on a variety of tasks, and is interpretable through qualitative and quantitative analyses. The code and models are open source.
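The dynamic range adaptive enhancement method is only named in the abstract, so its exact formulation is not reproduced here. Below is a minimal sketch of the general idea, assuming per-image log compression followed by an adaptive percentile stretch; the function name, default percentiles, and normalization are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def adaptive_dynamic_range_enhance(sar: np.ndarray,
                                   low_pct: float = 1.0,
                                   high_pct: float = 99.0) -> np.ndarray:
    """Illustrative sketch (not the paper's method): compress SAR dynamic range.

    Log-compresses the heavy-tailed SAR amplitudes, then stretches between
    per-image percentiles so the mapping adapts to each image's distribution.
    """
    # Log transform tames the extreme dynamic range of SAR backscatter.
    log_img = np.log1p(np.abs(sar).astype(np.float64))
    # Per-image percentile bounds make the stretch adaptive.
    lo, hi = np.percentile(log_img, [low_pct, high_pct])
    # Clip and rescale to [0, 1] for network input.
    return np.clip((log_img - lo) / max(hi - lo, 1e-12), 0.0, 1.0)
```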
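The mini-batch class-balanced loss is likewise described only at the level of its name. A minimal sketch follows, assuming the common pattern of reweighting cross-entropy by inverse class frequency computed per mini-batch; the paper's actual weighting scheme may differ.

```python
import torch
import torch.nn.functional as F

def minibatch_class_balanced_ce(logits: torch.Tensor,
                                targets: torch.Tensor) -> torch.Tensor:
    """Illustrative sketch (not the paper's loss): batch-level class balancing.

    Weights each class by its inverse frequency within the current mini-batch,
    so rare scene classes are not drowned out by frequent ones.
    """
    num_classes = logits.size(1)
    # Count each class in this batch; clamping avoids division by zero for
    # classes absent from the batch.
    counts = torch.bincount(targets, minlength=num_classes).clamp(min=1)
    weights = counts.float().reciprocal()
    # Normalize so the weights sum to the number of classes.
    weights = weights * (num_classes / weights.sum())
    return F.cross_entropy(logits, targets, weight=weights.to(logits.dtype))
```

For example, in a batch with four samples of class 0 and one of class 3, class 3 receives roughly four times the per-sample weight of class 0, keeping rare scene classes influential during pre-training.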

Cite (APA)

Yang, H., Kang, X., Liu, L., Liu, Y., & Huang, Z. (2023). SAR-HUB: Pre-Training, Fine-Tuning, and Explaining. Remote Sensing, 15(23). https://doi.org/10.3390/rs15235534
