Self-attention StarGAN for multi-domain image-to-image translation

Abstract

In this paper, we propose Self-attention StarGAN, which introduces a self-attention mechanism into StarGAN to handle multi-domain image-to-image translation, aiming to generate images with high-quality details and consistent backgrounds. The self-attention mechanism models long-range dependencies among feature maps at all positions, rather than being restricted to local image regions. We also take advantage of batch normalization to reduce reconstruction error and generate fine-grained texture details, and we adopt spectral normalization in the network to stabilize the training of Self-attention StarGAN. We conduct both quantitative and qualitative experiments on a public dataset. The results demonstrate that the proposed model achieves lower reconstruction error and generates higher-quality images than StarGAN. For perceptual evaluation on Amazon Mechanical Turk (AMT), 68.1% of 1,000 AMT workers agreed that the backgrounds of images generated by Self-attention StarGAN are more consistent with the original images.
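The self-attention block described in the abstract follows the general pattern popularized by SAGAN: 1×1 convolutions project the feature map into query, key, and value tensors, and a softmax over all spatial positions lets every location attend to every other one, capturing long-range dependencies. Below is a minimal PyTorch sketch of such a block with spectral normalization applied to its convolutions, as the abstract indicates. The module name, the channel-reduction factor of 8, and the zero-initialized residual gate `gamma` are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.utils import spectral_norm

class SelfAttention(nn.Module):
    """SAGAN-style self-attention over 2D feature maps (a sketch, not the authors' code)."""

    def __init__(self, in_channels):
        super().__init__()
        # 1x1 convolutions produce query/key/value projections;
        # spectral_norm constrains their Lipschitz constant to stabilize GAN training.
        self.query = spectral_norm(nn.Conv2d(in_channels, in_channels // 8, 1))
        self.key = spectral_norm(nn.Conv2d(in_channels, in_channels // 8, 1))
        self.value = spectral_norm(nn.Conv2d(in_channels, in_channels, 1))
        # gamma starts at 0 so the block initially passes features through unchanged.
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.size()
        n = h * w
        q = self.query(x).view(b, -1, n)   # (b, c//8, n)
        k = self.key(x).view(b, -1, n)     # (b, c//8, n)
        v = self.value(x).view(b, c, n)    # (b, c,    n)
        # Attention weights relate every spatial position to every other position.
        attn = F.softmax(torch.bmm(q.transpose(1, 2), k), dim=-1)      # (b, n, n)
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x        # gated residual connection
```

In a StarGAN-style generator, a block like this would typically sit between intermediate residual blocks, where feature maps are still large enough for long-range dependencies to matter; the learned `gamma` lets the network start from purely local features and gradually weight in the attended ones.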

Cite

APA

He, Z., Yang, Z., Mao, X., Lv, J., Li, Q., & Liu, W. (2019). Self-attention StarGAN for multi-domain image-to-image translation. In Lecture Notes in Computer Science (Vol. 11729, pp. 537–549). Springer. https://doi.org/10.1007/978-3-030-30508-6_43
