Improving Generative Moment Matching Networks with Distribution Partition

Abstract

Generative moment matching networks (GMMN) present a theoretically sound approach to learning deep generative models. However, such methods are typically limited by high sample complexity, which makes them impractical for generating complex data. In this paper, we present a new strategy for training GMMN with low sample complexity while retaining its theoretical soundness. Our method introduces auxiliary variables whose values are, in practice, provided by a pre-trained model such as an encoder network. Conditioned on these variables, we partition the target distribution into a set of conditional distributions, each of which can be matched effectively with low sample complexity. We instantiate this strategy as an amortized network, GMMN-DP, which shares auxiliary-variable information for the data generation task, and we develop an efficient stochastic training algorithm. Experimental results show that GMMN-DP can generate complex samples on datasets such as CelebA and CIFAR-10, where the vanilla GMMN fails.
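
For intuition, here is a minimal PyTorch sketch (not the authors' released code) of the core idea: a frozen pre-trained encoder E supplies the auxiliary codes, and a generator is trained with a kernel MMD loss to match each conditional distribution p(x | z) rather than the full marginal p(x). The class and function names, network architecture, kernel bandwidths, and dimensions below are illustrative assumptions.

    import torch
    import torch.nn as nn

    def rbf_mmd2(x, y, scales=(1.0, 2.0, 4.0, 8.0)):
        # Biased squared MMD between batches x and y under a mixture of
        # RBF kernels k(a, b) = exp(-||a - b||^2 / (2 * s)).
        xx, yy, xy = x @ x.t(), y @ y.t(), x @ y.t()
        rx = xx.diag().unsqueeze(0)           # squared norms of rows of x
        ry = yy.diag().unsqueeze(0)           # squared norms of rows of y
        dxx = rx.t() + rx - 2.0 * xx          # pairwise squared distances
        dyy = ry.t() + ry - 2.0 * yy
        dxy = rx.t() + ry - 2.0 * xy
        mmd2 = 0.0
        for s in scales:                      # sum over kernel bandwidths
            mmd2 = mmd2 + (torch.exp(-dxx / (2 * s)).mean()
                           + torch.exp(-dyy / (2 * s)).mean()
                           - 2.0 * torch.exp(-dxy / (2 * s)).mean())
        return mmd2

    class Generator(nn.Module):
        # Maps an auxiliary code z plus Gaussian noise eps to a sample.
        def __init__(self, z_dim=64, noise_dim=32, x_dim=784):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(z_dim + noise_dim, 256), nn.ReLU(),
                nn.Linear(256, x_dim))

        def forward(self, z, eps):
            return self.net(torch.cat([z, eps], dim=1))

    def train_step(G, E, x, opt, noise_dim=32):
        # E is a frozen pre-trained encoder that supplies auxiliary codes.
        # The generator only has to match p(x | z) for each code, which
        # requires far fewer samples than matching the full marginal p(x).
        with torch.no_grad():
            z = E(x)
        eps = torch.randn(x.size(0), noise_dim)
        loss = rbf_mmd2(x, G(z, eps))
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

This mirrors the paper's strategy only at a high level; the actual GMMN-DP architecture and stochastic training algorithm are described in the paper itself.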

Citation (APA)

Ren, Y., Luo, Y., & Zhu, J. (2021). Improving Generative Moment Matching Networks with Distribution Partition. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 11A, pp. 9403–9410). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i11.17133
