Learning to Sample with Adversarially Learned Likelihood-Ratio

  • Li C
  • Li J
  • Wang G
  • Carin L

Abstract

We link the reverse KL divergence with adversarial learning. This insight enables learning to synthesize realistic samples in two settings: (i) given a set of samples from the true distribution, an adversarially learned likelihood-ratio and a new entropy bound are used to learn a GAN model that improves synthesized sample quality relative to previous GAN variants; (ii) given an unnormalized distribution, a reference-based framework is proposed to learn to draw samples, naturally yielding an adversarial scheme to amortize MCMC/SVGD samples. Experimental results show the improved performance of the derived algorithms.

1 BACKGROUND ON THE REVERSE KL DIVERGENCE

Target Distribution. Assume we are given a set of samples $\mathcal{D} = \{x_i\}_{i=1}^{N}$, with each sample assumed drawn iid from an unknown distribution $q(x)$. For $x \in \mathcal{X}$, let $S_q \subset \mathcal{X}$ represent the support of $q$, implying that $S_q$ is the smallest subset of $\mathcal{X}$ for which $\int_{S_q} q(x)\,dx = 1$ (or $\int_{S_q} q(x)\,dx = 1 - \epsilon$, for $\epsilon \to 0^+$). Let $S_q^o$ represent the complement set of $S_q$, i.e., $S_q \cup S_q^o = \mathcal{X}$ and $S_q \cap S_q^o = \emptyset$.
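The link the abstract relies on is not spelled out here, but it follows the standard density-ratio identity from GAN theory: a discriminator trained to separate samples of $q$ from samples of the model recovers the log likelihood-ratio, which is exactly the quantity the reverse KL divergence needs. A sketch of the connection, writing $p_\theta(x)$ for the (assumed) generator distribution:

\[
\mathrm{KL}(p_\theta \,\|\, q) = \mathbb{E}_{x \sim p_\theta}\!\left[\log \frac{p_\theta(x)}{q(x)}\right] = -\mathbb{H}(p_\theta) - \mathbb{E}_{x \sim p_\theta}[\log q(x)].
\]

If a discriminator $D$ is trained to output 1 on $x \sim q$ and 0 on $x \sim p_\theta$, its Bayes-optimal solution is $D^*(x) = q(x)/(q(x) + p_\theta(x))$, so the log-ratio can be read off its logit:

\[
\log \frac{q(x)}{p_\theta(x)} = \log \frac{D^*(x)}{1 - D^*(x)}, \qquad \mathrm{KL}(p_\theta \,\|\, q) = -\,\mathbb{E}_{x \sim p_\theta}\!\left[\log \frac{D^*(x)}{1 - D^*(x)}\right].
\]

In setting (ii), where $q$ is known only up to a normalizer, $\mathbb{E}_{x \sim p_\theta}[\log q(x)]$ is computable but the entropy $\mathbb{H}(p_\theta)$ of an implicit generator is not, which is presumably where the paper's new entropy bound enters. A minimal numerical illustration of the ratio identity (my own sketch, not the paper's code): fit a logistic classifier between samples from two known Gaussians and compare its logit to the analytic log-ratio.

# Hedged sketch, not from the paper: the logit of a well-trained classifier
# between samples of q and p estimates log q(x) - log p(x).
import numpy as np

rng = np.random.default_rng(0)
n = 20000
xq = rng.normal(1.0, 1.0, n)                   # samples from q = N(1, 1)
xp = rng.normal(0.0, 1.0, n)                   # samples from p = N(0, 1)

def feats(x):                                  # [x, 1]: the true log-ratio is linear here
    return np.stack([x, np.ones_like(x)], axis=1)

X = np.concatenate([feats(xq), feats(xp)])
y = np.concatenate([np.ones(n), np.zeros(n)])  # label 1 = drawn from q

w = np.zeros(2)
for _ in range(2000):                          # logistic regression by gradient descent
    d = 1.0 / (1.0 + np.exp(-X @ w))           # D(x) = sigmoid(logit)
    w -= 0.5 * X.T @ (d - y) / len(y)          # average logistic-loss gradient

xs = np.linspace(-2.0, 3.0, 6)
print(feats(xs) @ w)                           # estimated log q(x)/p(x)
print(xs - 0.5)                                # analytic: log N(x;1,1) - log N(x;0,1) = x - 1/2

The estimated and analytic log-ratios should agree closely; with a flexible discriminator the same trick supplies the ratio term in the reverse KL objective above when $q$ is only available through samples.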

Cite


APA

Li, C., Li, J., Wang, G., & Carin, L. (2018). Learning to Sample with Adversarially Learned Likelihood-Ratio. ICLR 2018, (2), 1–6. Retrieved from https://openreview.net/forum?id=S1eZGHkDM
