Spatial resolution is an important indicator of the quality of remote sensing images. Generative adversarial networks have successfully recovered image texture in deep-learning super-resolution (SR) methods; however, existing methods are prone to texture distortion. To address this problem, this paper proposes an improved generative adversarial network that enhances the super-resolution reconstruction of medium- and low-resolution (LR) remote sensing images. The network builds on the Super-Resolution Generative Adversarial Network (SRGAN), substantially improving both the connection structure inside and outside the residual blocks and the design of the model loss function. The G1–G2–G3 structure between residual blocks effectively combines image information at small, medium and large scales. The model loss function is designed around the Charbonnier loss, which narrows the pixel-wise distance between the reconstructed remote sensing image and the original image, while a targeted perceptual loss directs the network to restore texture details according to semantic category. Subjective and objective evaluation of the generated images, together with ablation experiments, shows that compared with SRGAN and other networks, our method generates more realistic and reliable textures. In addition, the objective quality metrics peak signal-to-noise ratio (PSNR), structural similarity (SSIM) and multiscale structural similarity (MS-SSIM) of the reconstructed images are improved.
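For reference, the Charbonnier loss is a smooth, outlier-robust relative of the L1 loss, which is why it is favored over plain L2 for sharpening SR pixel losses. Below is a minimal PyTorch sketch; the function name, the epsilon value and the toy tensors are illustrative assumptions, not values taken from the paper:

```python
import torch

def charbonnier_loss(pred: torch.Tensor, target: torch.Tensor,
                     eps: float = 1e-3) -> torch.Tensor:
    """Charbonnier loss: mean of sqrt(diff^2 + eps^2).

    A differentiable approximation of L1 that penalizes outliers less
    than L2, which helps avoid over-smoothed SR reconstructions.
    """
    diff = pred - target
    return torch.mean(torch.sqrt(diff * diff + eps * eps))

# Toy usage with random tensors standing in for SR and ground-truth images.
sr = torch.rand(1, 3, 128, 128)
hr = torch.rand(1, 3, 128, 128)
loss = charbonnier_loss(sr, hr)
```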
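Perceptual losses in SRGAN-style networks are typically computed as a distance between deep feature maps of the reconstructed and reference images; the paper's "targeted", semantically weighted variant is not reproduced here. A minimal sketch of a conventional VGG19 perceptual loss, assuming a PyTorch/torchvision setup with ImageNet-normalized inputs:

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19, VGG19_Weights

class VGGPerceptualLoss(nn.Module):
    """L1 distance between VGG19 feature maps of the SR and HR images.

    Truncates VGG19 at the conv5_4 output before activation (a common
    SRGAN/ESRGAN choice); inputs are assumed ImageNet-normalized RGB.
    """
    def __init__(self, layer_index: int = 35):
        super().__init__()
        features = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features[:layer_index]
        for p in features.parameters():
            p.requires_grad = False  # frozen feature extractor
        self.features = features.eval()
        self.criterion = nn.L1Loss()

    def forward(self, sr: torch.Tensor, hr: torch.Tensor) -> torch.Tensor:
        return self.criterion(self.features(sr), self.features(hr))
```

A total training loss would then combine the two terms, e.g. `charbonnier_loss(sr, hr) + 0.01 * VGGPerceptualLoss()(sr, hr)`, where the 0.01 weight is a tunable hyperparameter chosen here for illustration, not the paper's value.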
Citation: Guo, J., Lv, F., Shen, J., Liu, J., & Wang, M. (2023). An improved generative adversarial network for remote sensing image super-resolution. IET Image Processing, 17(6), 1852–1863. https://doi.org/10.1049/ipr2.12760