Improvement of learning stability of generative adversarial network using variational learning


Abstract

In this paper, we propose a new network model that uses variational learning to improve the learning stability of generative adversarial networks (GANs). The proposed method can be easily applied to improve the learning stability of GAN-based models developed for various purposes, because the variational autoencoder (VAE) is used as a secondary network while the basic GAN structure is maintained. When the gradient of the generator vanishes during GAN training, the proposed method receives gradient information from the decoder of the VAE, which maintains a stable gradient, so that the learning processes of the generator and discriminator are not halted. Experimental results on the MNIST and CelebA datasets verify that the proposed method improves the learning stability of the networks by overcoming the vanishing gradient problem of the generator, while maintaining the excellent data quality of conventional GAN-based generative models.
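The core idea of the abstract — falling back on gradient information from a VAE decoder when the GAN generator's gradient vanishes — can be sketched as a gradient-selection rule. The following is a minimal illustrative sketch, not the paper's exact update rule; the `threshold` and `mix` parameters are hypothetical and assumed for illustration only.

```python
import numpy as np

def stabilized_gradient(gan_grad, vae_grad, threshold=1e-6, mix=0.5):
    """Illustrative sketch of the idea in the abstract (not the paper's
    exact rule): when the GAN generator's gradient has effectively
    vanished, use the gradient supplied by the VAE decoder so the
    generator update does not stall."""
    if np.linalg.norm(gan_grad) < threshold:
        # Generator gradient has vanished: fall back entirely on the
        # VAE decoder's gradient, which remains stable under
        # variational learning.
        return vae_grad
    # Otherwise blend both signals; `mix` is a hypothetical weight,
    # not a value from the paper.
    return (1.0 - mix) * gan_grad + mix * vae_grad

# Usage: a vanished GAN gradient is replaced by the VAE gradient,
# while a healthy GAN gradient is blended with it.
vanished = np.zeros(3)
healthy = np.array([1.0, 0.0, 0.0])
vae = np.array([0.1, 0.2, 0.3])
print(stabilized_gradient(vanished, vae))  # equals vae
print(stabilized_gradient(healthy, vae))   # weighted blend
```

The key design point mirrored here is that the VAE acts only as a secondary gradient source: the GAN structure and its usual update are untouched whenever the generator's gradient is healthy.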

Citation (APA)
Lee, J. Y., & Choi, S. I. (2020). Improvement of learning stability of generative adversarial network using variational learning. Applied Sciences (Switzerland), 10(13). https://doi.org/10.3390/app10134528
