Image super-resolution based on conditional generative adversarial network

Abstract

The generative adversarial network (GAN) is one of the most prevalent generative models capable of synthesising realistic high-frequency details. However, a mismatch between the input and the output may arise when a GAN is applied directly to image super-resolution. To alleviate this issue, the authors adopted a conditional GAN (cGAN) in this study. The cGAN discriminator, aided by the original low-resolution (LR) image, attempted to judge whether an unknown high-resolution (HR) image was produced by the generator. The authors proposed a novel discriminator that penalises only at the patch scale and therefore has relatively few parameters to train. The generator of the cGAN is an encoder-decoder with skip connections that shuttle shared low-level information directly across the network. To better preserve low-frequency information while recovering high-frequency information, they designed a generator loss function combining an adversarial loss term and an L1 loss term. The former is beneficial to the synthesis of fine-grained textures, while the latter is responsible for learning the overall structure of the LR input. Experiments showed that the proposed method can generate HR images with richer details and less over-smoothing.
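The combined generator objective described above can be sketched in a few lines of NumPy. This is a minimal illustration only: the non-saturating form of the adversarial term and the L1 weight `lam` are assumptions, since the abstract does not specify either.

```python
import numpy as np

def generator_loss(d_patch_scores, sr_image, hr_image, lam=100.0):
    """Combined generator loss: adversarial term + weighted L1 term.

    d_patch_scores: patch-wise discriminator probabilities in (0, 1),
        reflecting a discriminator that penalises at the patch scale.
    sr_image, hr_image: generated (super-resolved) and ground-truth images.
    lam: L1 weight; 100.0 is an assumed value, not taken from the paper.
    """
    eps = 1e-12  # numerical safety for the logarithm
    # Adversarial term: push each patch score toward "real" (score -> 1),
    # which encourages fine-grained texture synthesis.
    adv = -np.mean(np.log(d_patch_scores + eps))
    # L1 term: pixel-wise fidelity that preserves the overall
    # low-frequency structure of the LR input.
    l1 = np.mean(np.abs(sr_image - hr_image))
    return adv + lam * l1
```

Because the discriminator scores patches rather than the whole image, `d_patch_scores` is a grid of probabilities; averaging over it penalises each local region independently.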

Citation (APA)

Gao, H., Chen, Z., Huang, B., Chen, J., & Li, Z. (2020). Image super-resolution based on conditional generative adversarial network. IET Image Processing, 14(13), 3076–3083. https://doi.org/10.1049/iet-ipr.2018.5767
