SemiStarGAN: Semi-supervised Generative Adversarial Networks for Multi-domain Image-to-Image Translation

3 citations · 8 Mendeley readers

Abstract

Recent studies have shown significant advances in multi-domain image-to-image translation, and generative adversarial networks (GANs) are widely used to address this problem. However, to train an effective image generator, existing methods all require a large number of domain-labeled images, which can be time-consuming and costly to collect for real-world problems. In this paper, we propose SemiStarGAN, a semi-supervised GAN that tackles this issue. The proposed method utilizes unlabeled images by incorporating a novel discriminator/classifier network architecture—the Y model—and two existing semi-supervised learning techniques: pseudo labeling and self-ensembling. Experimental results on the CelebA dataset using facial-attribute domains show that the proposed method achieves performance comparable to state-of-the-art methods while using considerably fewer labeled training images.
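The pseudo-labeling technique mentioned in the abstract can be sketched as follows. This is an illustrative NumPy sketch, not the paper's implementation: the classifier outputs, the function name `pseudo_label`, and the 0.95 confidence threshold are all assumptions for the example.

```python
import numpy as np

def pseudo_label(probs, threshold=0.95):
    """Assign pseudo-labels to unlabeled samples whose top predicted
    class probability exceeds a confidence threshold; return the
    indices of the selected samples and their assigned labels.
    (Illustrative only; threshold choice is an assumption.)"""
    probs = np.asarray(probs)
    confidence = probs.max(axis=1)   # highest class probability per sample
    labels = probs.argmax(axis=1)    # predicted domain per sample
    keep = confidence >= threshold   # keep only confident predictions
    return np.flatnonzero(keep), labels[keep]

# Toy classifier outputs for 4 unlabeled images over 3 domains.
probs = [[0.97, 0.02, 0.01],   # confident -> pseudo-labeled as domain 0
         [0.40, 0.35, 0.25],   # uncertain -> discarded this round
         [0.05, 0.94, 0.01],   # just below threshold -> discarded
         [0.01, 0.01, 0.98]]   # confident -> pseudo-labeled as domain 2
idx, labels = pseudo_label(probs)
print(idx, labels)  # -> [0 3] [0 2]
```

In a semi-supervised training loop, the confidently pseudo-labeled images would then be added to the labeled pool used to train the classifier branch.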

Citation (APA)

Hsu, S.-Y., Yang, C.-Y., Huang, C.-C., & Hsu, J. Y.-J. (2019). SemiStarGAN: Semi-supervised Generative Adversarial Networks for Multi-domain Image-to-Image Translation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11364 LNCS, pp. 338–353). Springer Verlag. https://doi.org/10.1007/978-3-030-20870-7_21
