Remote sensing image fusion (RSIF) aims to generate a single image with both high spatial and high spectral resolution. Fused remote sensing images benefit applications such as disaster monitoring, ecological environment investigation, and dynamic change monitoring. However, most existing deep-learning-based RSIF methods require ground-truth (reference) images for training, and such ground truths are difficult to acquire. To address this, we propose a semisupervised RSIF method based on a multiscale conditional generative adversarial network that combines multiskip connections with a pseudo-Siamese structure. The method extracts features from the panchromatic and multispectral images simultaneously and fuses them without a ground truth, while the multiskip connections help preserve image details. In addition, we propose a composite loss function that combines the least squares loss, L1 loss, and peak signal-to-noise ratio (PSNR) loss to train the model; this composite loss helps retain both the spatial details and the spectral information of the source images. Extensive experiments verify the proposed method and show that it achieves outstanding performance without relying on ground truth.
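To make the described design concrete, the following PyTorch-style sketch illustrates one plausible reading of the pseudo-Siamese generator: two encoder branches with identical layouts but unshared weights process the panchromatic (PAN) image and the upsampled multispectral (MS) image, and skip connections from every encoder scale feed the decoder. The class name, layer widths, depth, and fusion points are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class PseudoSiameseGenerator(nn.Module):
    """Illustrative pseudo-Siamese generator: two unshared-weight encoders
    for PAN and MS inputs, with multiskip connections into a shared decoder."""

    def __init__(self, ms_bands=4, base=32):
        super().__init__()

        def down(cin, cout):  # stride-2 conv: halves the spatial size
            return nn.Sequential(nn.Conv2d(cin, cout, 3, 2, 1), nn.LeakyReLU(0.2))

        def up(cin, cout):  # stride-2 transposed conv: doubles the spatial size
            return nn.Sequential(nn.ConvTranspose2d(cin, cout, 4, 2, 1), nn.ReLU())

        # Same layout, separate weights (pseudo-Siamese): PAN has 1 band, MS has ms_bands.
        self.pan_enc = nn.ModuleList([down(1, base), down(base, 2 * base), down(2 * base, 4 * base)])
        self.ms_enc = nn.ModuleList([down(ms_bands, base), down(base, 2 * base), down(2 * base, 4 * base)])

        # Decoder consumes concatenated PAN+MS features from every scale (multiskip).
        self.dec3 = up(8 * base, 2 * base)
        self.dec2 = up(2 * base + 4 * base, base)
        self.dec1 = up(base + 2 * base, base)
        self.head = nn.Conv2d(base, ms_bands, 3, 1, 1)

    def forward(self, pan, ms_up):
        # pan: (N, 1, H, W); ms_up: (N, ms_bands, H, W), MS upsampled to PAN size.
        # H and W must be divisible by 8 for the three down/up stages to align.
        p, m, skips = pan, ms_up, []
        for pe, me in zip(self.pan_enc, self.ms_enc):
            p, m = pe(p), me(m)
            skips.append(torch.cat([p, m], dim=1))  # keep fused features at every scale
        x = self.dec3(skips[2])                      # deepest fused features
        x = self.dec2(torch.cat([x, skips[1]], dim=1))
        x = self.dec1(torch.cat([x, skips[0]], dim=1))
        return torch.sigmoid(self.head(x))           # fused high-resolution MS image
```

Similarly, a minimal sketch of the composite loss: a least-squares (LSGAN) adversarial term plus L1 and PSNR terms. The weights lambda_l1 and lambda_psnr are hypothetical hyperparameters, and target stands for whatever reference signal the semisupervised scheme supplies (e.g., the source images in the unsupervised branch); neither is specified by the abstract.

```python
import torch
import torch.nn.functional as F

def psnr_loss(fused, target, max_val=1.0, eps=1e-8):
    # Differentiable PSNR, negated so that minimizing the loss raises PSNR.
    mse = F.mse_loss(fused, target)
    return -10.0 * torch.log10(max_val ** 2 / (mse + eps))

def generator_loss(d_fake, fused, target, lambda_l1=100.0, lambda_psnr=0.1):
    # Least-squares (LSGAN) adversarial term: push discriminator scores toward 1.
    adv = F.mse_loss(d_fake, torch.ones_like(d_fake))
    # L1 term for spatial detail; PSNR term for overall reconstruction fidelity.
    return adv + lambda_l1 * F.l1_loss(fused, target) + lambda_psnr * psnr_loss(fused, target)
```

A quick smoke test under these assumptions: `PseudoSiameseGenerator()(torch.rand(1, 1, 64, 64), torch.rand(1, 4, 64, 64))` returns a (1, 4, 64, 64) fused image that can be scored by `generator_loss`.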
Citation: Jin, X., Huang, S., Jiang, Q., Lee, S. J., Wu, L., & Yao, S. (2021). Semisupervised Remote Sensing Image Fusion Using Multiscale Conditional Generative Adversarial Network with Siamese Structure. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 14, 7066–7084. https://doi.org/10.1109/JSTARS.2021.3090958