This paper provides a comparative analysis of two recent image-to-image translation models based on Generative Adversarial Networks (GANs). The first, UNIT, consists of coupled GANs and variational autoencoders (VAEs) with a shared latent space; the second, StarGAN, uses a single GAN model to handle multiple domains. Given training data from two different domains of the CelebA dataset, each model learns the translation task in both directions. Here, the term domain denotes a set of images sharing the same attribute value; the attributes considered are eyeglasses, blond hair, beard, smiling, and age. Five UNIT models are trained separately, whereas only one StarGAN model is trained for all attributes. For evaluation, we conduct experiments and provide a quantitative comparison using the Generative Adversarial Metric (GAM) to quantify the ability to generalize and the ability to generate photorealistic images. The experimental results show that the cross-domain UNIT model outperforms the multi-domain StarGAN on the age and eyeglasses attributes, and performs comparably on the remaining attributes.
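The GAM evaluation mentioned above compares two trained GANs by swapping their discriminators: each discriminator is scored both on held-out real images and on samples produced by the rival generator. The following is a minimal sketch of that ratio computation, not the authors' implementation; the callables `d1`, `d2` and the sample arrays are hypothetical stand-ins for trained discriminators and generator outputs.

```python
import numpy as np

def error_rate(disc, images, labels):
    """Fraction of images that disc misclassifies.
    disc: callable mapping a batch of images to real/fake scores in [0, 1].
    labels: 1 for real images, 0 for generated (fake) images."""
    preds = (disc(images) > 0.5).astype(int)
    return float(np.mean(preds != labels))

def gam_ratios(d1, d2, g1_samples, g2_samples, test_images, eps=1e-8):
    """GAM comparison of two models M1 = (G1, D1) and M2 = (G2, D2):
    r_test    -- discriminator errors on held-out real test images;
    r_samples -- each discriminator's error on the *rival* generator's fakes.
    A small eps guards against division by zero when an error rate is 0."""
    n_test = len(test_images)
    real = np.ones(n_test, dtype=int)
    fake1 = np.zeros(len(g1_samples), dtype=int)
    fake2 = np.zeros(len(g2_samples), dtype=int)
    r_test = (error_rate(d1, test_images, real) + eps) / \
             (error_rate(d2, test_images, real) + eps)
    r_samples = (error_rate(d1, g2_samples, fake2) + eps) / \
                (error_rate(d2, g1_samples, fake1) + eps)
    return r_test, r_samples
```

Under this convention, `r_samples < 1` with `r_test ≈ 1` would favor model M1 (its discriminator is harder to fool by the rival's samples without having simply overfit the test data), and the symmetric reading favors M2.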
Zeno, B., Kalinovskiy, I., & Matveev, Y. (2019). Comparative Review of Cross-Domain Generative Adversarial Networks. In IOP Conference Series: Materials Science and Engineering (Vol. 618). Institute of Physics Publishing. https://doi.org/10.1088/1757-899X/618/1/012012