ESA-CycleGAN: Edge feature and self-attention based cycle-consistent generative adversarial network for style transfer

Abstract

Style transfer is now used in a wide range of commercial applications, such as image beautification and film rendering. However, many existing style-transfer methods suffer from loss of detail and poor overall visual quality. To address these problems, an edge feature and self-attention based cycle-consistent generative adversarial network (ESA-CycleGAN) is proposed. The model architecture consists of a generator, a discriminator, and an edge feature extraction network. Both the generator and the discriminator contain a self-attention module to capture global features of the image. The edge feature extraction network extracts the edges of the original image and feeds them into the network together with the original image, allowing details to be processed more faithfully. In addition, a perceptual loss term is added to optimize the network, yielding perceptually better results. ESA-CycleGAN is evaluated on four datasets. The experimental results show that its Inception Score (IS) and Fréchet Inception Distance (FID) compare favourably with those of several existing models, indicating that the model better preserves the details of the original images while producing higher-quality style transfer.
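
For illustration, below is a minimal PyTorch sketch of a SAGAN-style self-attention block of the kind the abstract describes for the generator and discriminator. The layer widths, the 1x1-convolution projections, and the edge-map concatenation shown in the usage comments are assumptions for the sketch, not the paper's exact implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    # Self-attention over spatial positions, so every pixel can attend to
    # every other pixel (the "global features" the abstract mentions).
    def __init__(self, in_channels):
        super().__init__()
        # 1x1 convolutions project the feature map into query/key/value spaces.
        self.query = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.key = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.value = nn.Conv2d(in_channels, in_channels, kernel_size=1)
        # Learnable mixing weight, initialised to 0 so the block starts as identity.
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (B, HW, C//8)
        k = self.key(x).flatten(2)                    # (B, C//8, HW)
        v = self.value(x).flatten(2)                  # (B, C, HW)
        attn = F.softmax(q @ k, dim=-1)               # (B, HW, HW) attention map
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                   # residual connection

# Hypothetical usage: the abstract says the edge map is fed into the network
# together with the original image, which could be done by channel-wise
# concatenation before the generator:
#   rgb  = torch.rand(1, 3, 128, 128)        # input image
#   edge = torch.rand(1, 1, 128, 128)        # edge map from the extraction network
#   g_in = torch.cat([rgb, edge], dim=1)     # (1, 4, 128, 128) generator input
#   feat = SelfAttention(64)(torch.rand(1, 64, 32, 32))  # attention on a feature map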

Cite

APA

Wang, L., Wang, L., & Chen, S. (2022). ESA-CycleGAN: Edge feature and self-attention based cycle-consistent generative adversarial network for style transfer. IET Image Processing, 16(1), 176–190. https://doi.org/10.1049/ipr2.12342
