GANHopper: Multi-hop GAN for Unsupervised Image-to-Image Translation


Abstract

We introduce GANHopper, an unsupervised image-to-image translation network that transforms images gradually between two domains, through multiple hops. Instead of executing translation directly, we steer the translation by requiring the network to produce in-between images that resemble weighted hybrids between images from the input domains. Our network is trained on unpaired images from the two domains only, without any in-between images. All hops are produced using a single generator along each direction. In addition to the standard cycle-consistency and adversarial losses, we introduce a new hybrid discriminator, which is trained to classify the intermediate images produced by the generator as weighted hybrids, with weights based on a predetermined hop count. We also add a smoothness term to constrain the magnitude of each hop, further regularizing the translation. Compared to previous methods, GANHopper excels at image translations involving domain-specific image features and geometric variations while also preserving non-domain-specific features such as general color schemes.
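
To make the multi-hop idea above more concrete, the sketch below shows how a single generator could be applied repeatedly to produce intermediate hybrids, together with illustrative versions of the hybrid, smoothness, and cycle-consistency terms described in the abstract. All names (Generator, HybridDiscriminator, multi_hop_translate, num_hops, the loss weights) and the toy architectures are assumptions made for this illustration, not the authors' implementation; the standard adversarial loss on the final hop is omitted for brevity.

```python
# Minimal PyTorch-style sketch of the multi-hop losses described in the abstract.
# All module definitions, names, and loss weights are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Generator(nn.Module):
    """Toy image-to-image generator; a real model would be far deeper."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)


class HybridDiscriminator(nn.Module):
    """Predicts a scalar 'hybridness' weight in [0, 1] for an image."""
    def __init__(self, channels=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(channels, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.head(h))


def multi_hop_translate(G, x, num_hops):
    """Apply the same generator repeatedly, keeping every intermediate hop."""
    hops, current = [], x
    for _ in range(num_hops):
        current = G(current)
        hops.append(current)
    return hops


def generator_losses(G_ab, G_ba, D_hybrid, x_a, num_hops=4,
                     lambda_cyc=10.0, lambda_smooth=1.0):
    """Illustrative loss terms for one translation direction (A -> B)."""
    hops = multi_hop_translate(G_ab, x_a, num_hops)

    # Hybrid loss: the i-th hop should be classified as an i/num_hops blend.
    hybrid_loss = 0.0
    for i, h in enumerate(hops, start=1):
        pred = D_hybrid(h)
        target = torch.full_like(pred, i / num_hops)
        hybrid_loss = hybrid_loss + F.mse_loss(pred, target)

    # Smoothness: constrain the magnitude of each individual hop.
    smooth_loss, prev = 0.0, x_a
    for h in hops:
        smooth_loss = smooth_loss + F.l1_loss(h, prev)
        prev = h

    # Cycle consistency: hop all the way back with the reverse generator.
    back = hops[-1]
    for _ in range(num_hops):
        back = G_ba(back)
    cycle_loss = F.l1_loss(back, x_a)

    return hybrid_loss + lambda_smooth * smooth_loss + lambda_cyc * cycle_loss


# Usage sketch:
# G_ab, G_ba, D_hyb = Generator(), Generator(), HybridDiscriminator()
# x_a = torch.randn(2, 3, 64, 64)
# loss = generator_losses(G_ab, G_ba, D_hyb, x_a)
# loss.backward()
```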

Citation (APA)

Lira, W., Merz, J., Ritchie, D., Cohen-Or, D., & Zhang, H. (2020). GANHopper: Multi-hop GAN for Unsupervised Image-to-Image Translation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12371 LNCS, pp. 363–379). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-58574-7_22
