In this paper, we study the problem of multi-domain image generation, whose goal is to generate pairs of corresponding images from different domains. With recent developments in generative models, image generation has made great progress and has been applied to various computer vision tasks. However, multi-domain image generation may not achieve the desired performance because learning the correspondence between images of different domains is difficult, especially when paired samples are not given. To tackle this problem, we propose the Regularized Conditional GAN (RegCGAN), which learns to generate corresponding images in the absence of paired training data. RegCGAN is based on the conditional GAN, and we introduce two regularizers that guide the model to learn the corresponding semantics of the different domains. We evaluate the proposed model on several tasks for which paired training data are not given, including the generation of edges and photos, the generation of faces with different attributes, etc. The experimental results show that our model successfully generates corresponding images for all these tasks while outperforming the baseline methods. We also introduce an approach for applying RegCGAN to unsupervised domain adaptation.
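The conditioning scheme described above can be illustrated with a minimal NumPy sketch: the same latent code is combined with different one-hot domain labels, and a correspondence penalty ties the resulting outputs together. The L2 penalty, the `toy_generator`, and all names here are illustrative stand-ins for exposition, not the paper's actual regularizers or architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def conditional_input(z, domain, num_domains):
    """Concatenate a shared latent code with a one-hot domain label."""
    onehot = np.zeros(num_domains)
    onehot[domain] = 1.0
    return np.concatenate([z, onehot])

def toy_generator(x, weights):
    """A one-layer stand-in for the conditional generator network."""
    return np.tanh(weights @ x)

num_domains, z_dim, out_dim = 2, 8, 4
weights = rng.normal(scale=0.1, size=(out_dim, z_dim + num_domains))

z = rng.normal(size=z_dim)                  # one shared latent code ...
x_a = conditional_input(z, 0, num_domains)  # ... conditioned on domain A
x_b = conditional_input(z, 1, num_domains)  # ... conditioned on domain B

out_a = toy_generator(x_a, weights)
out_b = toy_generator(x_b, weights)

# Illustrative correspondence penalty: outputs generated from the same
# latent code under different domain labels should stay close, so the
# two domains share semantics. This L2 term is a placeholder, not the
# paper's regularizer.
reg_loss = np.sum((out_a - out_b) ** 2)
```

In a real training loop this penalty would be added to the conditional-GAN objective, so the generator is pushed to reuse the latent code's content across both domain conditions.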
Mao, X., & Li, Q. (2018). Unpaired multi-domain image generation via regularized conditional GANs. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2018-July, pp. 2553–2559). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2018/354