MIXGAN: Learning Concepts from Different Domains for Mixture Generation

Citations: 6 · Mendeley readers: 48

Abstract

In this work, we present an interesting attempt at mixture generation: absorbing different image concepts (e.g., content and style) from different domains and thus generating a new domain with the learned concepts. In particular, we propose a mixture generative adversarial network (MIXGAN). MIXGAN learns the concepts of content and style from two domains respectively, and can therefore join them for mixture generation in a new domain, i.e., generating images whose content comes from one domain and whose style comes from another. MIXGAN overcomes the limitation of current GAN-based models, which either generate new images only in the same domain observed during training, or require off-the-shelf content templates for transfer or translation. Extensive experimental results demonstrate the effectiveness of MIXGAN compared to related state-of-the-art GAN-based models.
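To make the idea of "content from one domain, style from another" concrete, below is a minimal, hypothetical PyTorch sketch of a generator that combines a content code extracted from domain A with a style code extracted from domain B and decodes the pair into a single image. The module names (ContentEncoder, StyleEncoder, Decoder), dimensions, and layer choices are illustrative assumptions only and do not reproduce the architecture or training procedure described in the MIXGAN paper.

```python
# Hypothetical sketch: mix a content code from domain A with a style code
# from domain B and decode the concatenated pair into a new image.
# This is NOT the paper's architecture; all names and sizes are assumptions.
import torch
import torch.nn as nn


class ContentEncoder(nn.Module):
    """Maps an RGB image to a compact content code."""
    def __init__(self, content_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, content_dim),
        )

    def forward(self, x):
        return self.net(x)


class StyleEncoder(nn.Module):
    """Maps an RGB image to a compact style code."""
    def __init__(self, style_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, style_dim),
        )

    def forward(self, x):
        return self.net(x)


class Decoder(nn.Module):
    """Decodes a concatenated (content, style) code into a 32x32 RGB image."""
    def __init__(self, content_dim=64, style_dim=16):
        super().__init__()
        self.fc = nn.Linear(content_dim + style_dim, 128 * 4 * 4)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, content, style):
        h = self.fc(torch.cat([content, style], dim=1)).view(-1, 128, 4, 4)
        return self.net(h)


if __name__ == "__main__":
    x_a = torch.randn(8, 3, 32, 32)  # batch from domain A (content source)
    x_b = torch.randn(8, 3, 32, 32)  # batch from domain B (style source)
    enc_c, enc_s, dec = ContentEncoder(), StyleEncoder(), Decoder()
    mixed = dec(enc_c(x_a), enc_s(x_b))  # content of A rendered in style of B
    print(mixed.shape)  # torch.Size([8, 3, 32, 32])
```

In a full GAN setup such a generator would be trained adversarially against discriminators on the two source domains; that training loop is omitted here, and the paper should be consulted for how MIXGAN actually enforces the content/style separation.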

Citation (APA)

Hao, G. Y., Yu, H. X., & Zheng, W. S. (2018). MIXGAN: Learning concepts from different domains for mixture generation. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2018-July, pp. 2212–2219). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2018/306
