Abstract
Self-attention has been successfully leveraged to capture long-range feature-wise similarities in deep learning super-resolution (SR) methods. However, most SR methods only explore features at the original scale and do not take full advantage of self-similar features across scales, especially in generative adversarial networks (GANs). In this paper, a self-similarity generative adversarial network (SSGAN) is proposed as an SR framework. The framework establishes multi-scale feature correlations by adding two modules to the generator: a downscale attention block (DAB) and an upscale attention block (UAB). Specifically, the DAB is designed to restore repetitive details from the corresponding downsampled image, achieving multi-scale feature restoration through self-similarity. The UAB improves on the baseline up-sampling operations and captures the low-resolution to high-resolution feature mapping, enhancing cross-scale repetitive features to reconstruct the high-resolution image. Experimental results demonstrate that the proposed SSGAN achieves better visual quality, especially in regions with repeated patterns.
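The abstract does not specify the internals of the DAB or UAB. As a rough illustration only, the sketch below shows one plausible reading of a downscale-attention block: standard scaled dot-product attention where queries come from the original-scale feature map and keys/values come from a downsampled copy, so that coarser-scale self-similar patterns guide the reconstruction. All names, the PyTorch framing, and the 2x scale factor are assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DownscaleAttention(nn.Module):
    """Hypothetical cross-scale attention sketch (not the paper's exact DAB):
    queries from the original-scale features, keys/values from a downsampled
    copy, so each position attends to coarser-scale self-similar patterns."""

    def __init__(self, channels, scale=2):
        super().__init__()
        self.scale = scale
        self.q = nn.Conv2d(channels, channels, 1)
        self.k = nn.Conv2d(channels, channels, 1)
        self.v = nn.Conv2d(channels, channels, 1)
        self.out = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        # Downsample to expose self-similarities at a coarser scale.
        x_down = F.interpolate(x, scale_factor=1 / self.scale,
                               mode="bilinear", align_corners=False)
        q = self.q(x).flatten(2).transpose(1, 2)        # (b, h*w, c)
        k = self.k(x_down).flatten(2)                   # (b, c, h'*w')
        v = self.v(x_down).flatten(2).transpose(1, 2)   # (b, h'*w', c)
        # Scaled dot-product attention across scales.
        attn = torch.softmax(q @ k / c ** 0.5, dim=-1)  # (b, h*w, h'*w')
        y = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + self.out(y)                          # residual connection

x = torch.randn(1, 64, 32, 32)
print(DownscaleAttention(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```

A UAB-style block could follow the same pattern in reverse, with queries taken from an upsampled feature map, but the abstract gives too little detail to sketch it faithfully.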
Citation
Wang, S., Sun, Z., & Li, Q. (2023). Image super-resolution based on self-similarity generative adversarial networks. IET Image Processing, 17(1), 157–165. https://doi.org/10.1049/ipr2.12624