Due to technology and cost limitations, it is challenging to obtain images with both high temporal and high spatial resolution from a single satellite spectrometer, which significantly limits the application of such remote sensing images in earth science. To address the problem that existing algorithms cannot effectively balance spatial detail preservation with the reconstruction of spectral change, a pseudo-Siamese deep convolutional neural network (PDCNN) for spatiotemporal fusion is proposed in this article. The method adopts a pseudo-Siamese framework with two independent, identically structured feature extraction streams whose weights are not shared. The two streams process the image information at the earlier and later reference times, respectively, and each reconstructs a fine image at the prediction time, so that the information available at different times is fully exploited. Within each feature extraction stream, a multiscale mechanism and dilated convolutions with flexible receptive fields are designed to extract feature information at different scales and improve reconstruction accuracy. In addition, an attention mechanism is introduced to increase the weight of the most informative features of the remote sensing images, and residual connections enhance the reuse of shallow feature information and reduce its loss in deeper layers. Finally, the fine images produced by the two feature extraction streams are combined by weighted fusion to obtain the final predicted image. Subjective and objective results demonstrate that the PDCNN can effectively reconstruct fused images of higher quality.
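To make the two-stream design concrete, the following is a minimal PyTorch sketch of such a pseudo-Siamese fusion network. It is not the authors' implementation: the module names, channel counts, dilation rates, the squeeze-and-excitation style channel attention, and the learned per-pixel weighting are assumptions chosen only to illustrate the described components (unshared streams, multiscale dilated convolutions, attention, residual reuse, weighted fusion).

```python
# Illustrative sketch (assumed layout, not the authors' code): each stream takes the
# coarse image at the prediction date plus one reference coarse/fine pair, extracts
# multiscale dilated features with channel attention and a residual connection, and
# the two reconstructed fine images are blended by a learned per-pixel weight map.
import torch
import torch.nn as nn


class MultiScaleDilatedBlock(nn.Module):
    """Parallel 3x3 convolutions with different dilation rates (assumed: 1, 2, 4)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=d, dilation=d) for d in (1, 2, 4)
        )
        self.fuse = nn.Conv2d(3 * out_ch, out_ch, 1)
        # Simple channel attention (assumption), re-weighting informative channels.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(out_ch, out_ch // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch // 4, out_ch, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        feat = self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
        return feat * self.attn(feat)


class FeatureStream(nn.Module):
    """One feature-extraction stream: reconstructs a fine image at the prediction
    date from one reference coarse/fine pair and the coarse prediction-date image."""
    def __init__(self, bands, width=32):
        super().__init__()
        self.head = nn.Conv2d(3 * bands, width, 3, padding=1)
        self.block1 = MultiScaleDilatedBlock(width, width)
        self.block2 = MultiScaleDilatedBlock(width, width)
        self.tail = nn.Conv2d(width, bands, 3, padding=1)

    def forward(self, coarse_ref, fine_ref, coarse_pred):
        x = self.head(torch.cat([coarse_ref, fine_ref, coarse_pred], dim=1))
        x = x + self.block2(self.block1(x))  # residual reuse of shallow features
        return self.tail(x)


class PseudoSiameseFusion(nn.Module):
    """Two streams with identical structure but unshared weights; their outputs
    are combined by a per-pixel weight map predicted from both reconstructions."""
    def __init__(self, bands=6):
        super().__init__()
        self.stream_prev = FeatureStream(bands)   # earlier reference date
        self.stream_next = FeatureStream(bands)   # later reference date
        self.weighting = nn.Sequential(
            nn.Conv2d(2 * bands, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, c_prev, f_prev, c_next, f_next, c_pred):
        f_hat_prev = self.stream_prev(c_prev, f_prev, c_pred)
        f_hat_next = self.stream_next(c_next, f_next, c_pred)
        w = self.weighting(torch.cat([f_hat_prev, f_hat_next], dim=1))
        return w * f_hat_prev + (1 - w) * f_hat_next


if __name__ == "__main__":
    net = PseudoSiameseFusion(bands=6)
    imgs = [torch.randn(1, 6, 128, 128) for _ in range(5)]
    print(net(*imgs).shape)  # torch.Size([1, 6, 128, 128])
```

The per-pixel sigmoid weight map is one simple way to realize the weighted fusion of the two reconstructed fine images; the paper's actual weighting scheme may differ.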
Li, W., Yang, C., Peng, Y., & Du, J. (2022). A Pseudo-Siamese Deep Convolutional Neural Network for Spatiotemporal Satellite Image Fusion. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 15, 1205–1220. https://doi.org/10.1109/JSTARS.2022.3143464