Stochastically Flipping Labels of Discriminator's Outputs for Training Generative Adversarial Networks

Abstract

Generative Adversarial Networks (GANs) stage an adversarial game between two neural networks: the generator and the discriminator. Many studies treat the discriminator's outputs as an implicit posterior distribution over the input image distribution. Under this view, increasing the discriminator's output dimensions lets it represent richer information than a single output dimension can. However, increasing the output dimensions also yields a very strong discriminator, which can easily overpower the generator and break the balance of adversarial learning. Resolving this conflict while elevating the generation quality of GANs remains challenging. Hence, we propose a simple yet effective method that resolves the conflict through stochastic selection, extending the flipped and non-flipped non-saturating losses of BipGAN. We organized our experiments around the well-known BigGAN and StyleGAN models for comparison. On several standard evaluation metrics and real-world datasets, our experiments validate that the approach strengthens generation quality within limited output dimensions, and it achieves competitive results on the human face generation task.
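The core idea of the abstract — stochastically choosing between flipped and non-flipped targets for a discriminator with multiple output dimensions — can be sketched as follows. This is a minimal NumPy illustration only: the function names, the per-dimension independence of the flips, and the flip probability are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def ns_disc_loss(logits_real, logits_fake, flip_prob=0.1):
    """Non-saturating discriminator loss with stochastic label flipping.

    Each output dimension of the discriminator independently keeps its
    usual target (real -> 1, fake -> 0) or has it flipped with probability
    `flip_prob`, mimicking a stochastic selection between the flipped and
    non-flipped non-saturating losses (illustrative, not the paper's code).
    """
    def bce(logits, targets):
        # Numerically stable binary cross-entropy with logits.
        return np.mean(np.maximum(logits, 0) - logits * targets
                       + np.log1p(np.exp(-np.abs(logits))))

    # Draw a flip mask per output dimension and per sample.
    t_real = np.where(rng.random(logits_real.shape) < flip_prob, 0.0, 1.0)
    t_fake = np.where(rng.random(logits_fake.shape) < flip_prob, 1.0, 0.0)
    return bce(logits_real, t_real) + bce(logits_fake, t_fake)
```

With `flip_prob=0.0` this reduces to the standard non-saturating discriminator loss over all output dimensions; raising `flip_prob` softens the discriminator's targets, which is one way to keep a high-dimensional-output discriminator from overpowering the generator.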

Citation (APA)
Yang, R., Vo, D. M., & Nakayama, H. (2022). Stochastically Flipping Labels of Discriminator’s Outputs for Training Generative Adversarial Networks. IEEE Access, 10, 103644–103654. https://doi.org/10.1109/ACCESS.2022.3210130
