Generative adversarial image super-resolution network for multiple degradations


Abstract

Existing deep-learning-based single-image super-resolution methods cannot handle multiple degradations well, and the generated images tend to be blurred and over-smoothed owing to poor generalisation ability. In this study, the authors propose a method based on a generative adversarial network (GAN) to deal with multiple degradations. In the generator network, the blur kernel and noise level are preprocessed with a dimensionality stretching strategy and used as input, making full use of prior knowledge. In addition, three discriminators at different scales are used in the discriminator network, so that the reconstruction of image details receives attention alongside the global consistency of the image. To address the vanishing-gradient and mode-collapse problems of GAN-based methods, a gradient penalty term is added to the loss function. Extensive experiments demonstrate that the proposed method not only handles multiple degradations with state-of-the-art performance, but also delivers visually credible results in real scenes.
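The dimensionality stretching mentioned in the abstract can be sketched as follows: a 2-D blur kernel is projected to a short code (typically by PCA), the scalar noise level is appended, and each resulting scalar is stretched into a constant spatial map matching the low-resolution input, so the maps can be concatenated with the image channels fed to the generator. This is a minimal NumPy illustration under assumed sizes (15×15 kernel, 8 principal components), not the authors' exact implementation.

```python
import numpy as np

def dimensionality_stretch(kernel, noise_level, pca_basis, lr_h, lr_w):
    """Project a blur kernel and noise level into per-pixel degradation maps.

    kernel     : (ks, ks) blur kernel
    noise_level: scalar noise level
    pca_basis  : (t, ks*ks) assumed PCA projection matrix
    returns    : (t+1, lr_h, lr_w) degradation maps
    """
    k = kernel.flatten()                          # (ks*ks,)
    code = pca_basis @ k                          # (t,) low-dimensional kernel code
    code = np.append(code, noise_level)           # (t+1,) append the noise level
    # Stretch each scalar into a constant map of the LR spatial size.
    maps = np.broadcast_to(code[:, None, None], (code.size, lr_h, lr_w))
    return maps.copy()

# Illustrative usage: a normalised random 15x15 kernel and a random PCA basis.
rng = np.random.default_rng(0)
kernel = rng.random((15, 15))
kernel /= kernel.sum()
pca_basis = rng.standard_normal((8, 15 * 15))     # assumed 8 components
deg_maps = dimensionality_stretch(kernel, 0.05, pca_basis, lr_h=32, lr_w=32)
print(deg_maps.shape)  # (9, 32, 32): 8 kernel codes + 1 noise-level map
```

In practice such maps would be concatenated with the low-resolution image along the channel axis, which lets a single generator condition on arbitrary blur/noise settings at test time.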

Citation (APA)

Lin, H., Fan, J., Zhang, Y., & Peng, D. (2020). Generative adversarial image super-resolution network for multiple degradations. IET Image Processing, 14(17), 4520–4527. https://doi.org/10.1049/iet-ipr.2020.1176
