Flow-GAN: Combining maximum likelihood and adversarial learning in generative models

Citations: 99 · Mendeley readers: 325

Abstract

Adversarial learning of probabilistic models has recently emerged as a promising alternative to maximum likelihood. Implicit models such as generative adversarial networks (GANs) often generate better samples than explicit models trained by maximum likelihood. Yet GANs sidestep the characterization of an explicit density, which makes quantitative evaluation challenging. To bridge this gap, we propose Flow-GAN, a generative adversarial network for which exact likelihood evaluation is possible, thus supporting both adversarial and maximum likelihood training. When trained adversarially, Flow-GANs generate high-quality samples but attain extremely poor log-likelihood scores, inferior even to a mixture model memorizing the training data; the opposite holds when trained by maximum likelihood. Results on MNIST and CIFAR-10 demonstrate that hybrid training can attain high held-out likelihoods while retaining visual fidelity in the generated samples.
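The key property enabling both training modes is that a normalizing-flow generator admits exact log-likelihoods via the change-of-variables formula, which can then be combined with an adversarial term in a single objective. The sketch below illustrates this with a one-layer affine flow and a hypothetical discriminator score; the flow architecture, loss weighting `lam`, and function names are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

# Sketch (assumed architecture): a one-layer affine flow x = s * z + b
# with a standard-normal base density. Its inverse is z = (x - b) / s,
# so log|det d(f^-1)/dx| = -sum(log s), and by change of variables:
#   log p(x) = log p_z(f^{-1}(x)) + log|det d(f^{-1})/dx|

def flow_log_likelihood(x, s, b):
    """Exact log p(x) under the affine flow; x: (n, d), s, b: (d,)."""
    z = (x - b) / s                                        # invert the flow
    log_pz = -0.5 * np.sum(z**2 + np.log(2 * np.pi), axis=-1)
    log_det = -np.sum(np.log(s))                           # Jacobian term
    return log_pz + log_det

def hybrid_loss(x, s, b, disc_scores, lam=1.0):
    """Hybrid objective (sketch): adversarial loss plus lam * average NLL.

    disc_scores: discriminator outputs in (0, 1) on generated samples;
    the adversarial term is the generator's non-saturating GAN loss.
    """
    nll = -np.mean(flow_log_likelihood(x, s, b))
    adv = -np.mean(np.log(disc_scores + 1e-8))
    return adv + lam * nll
```

Setting `lam=0` recovers purely adversarial training, while dropping the adversarial term recovers maximum likelihood; the hybrid regime interpolates between the two, which is what lets the model trade sample quality against held-out likelihood.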

Citation (APA)

Grover, A., Dhar, M., & Ermon, S. (2018). Flow-GAN: Combining maximum likelihood and adversarial learning in generative models. In 32nd AAAI Conference on Artificial Intelligence, AAAI 2018 (pp. 3069–3076). AAAI Press. https://doi.org/10.1609/aaai.v32i1.11829
