Image-Based CLIP-Guided Essence Transfer

Citations: 10 · Mendeley readers: 36

Abstract

We make the distinction between (i) style transfer, in which a source image is manipulated to match the textures and colors of a target image, and (ii) essence transfer, in which one edits the source image to include high-level semantic attributes from the target. Crucially, the semantic attributes that constitute the essence of an image may differ from image to image. Our blending operator combines the powerful StyleGAN generator and the semantic encoder of CLIP in a novel way that is simultaneously additive in both latent spaces, resulting in a mechanism that guarantees both identity preservation and high-level feature transfer without relying on a facial recognition network. We present two variants of our method. The first is based on optimization, while the second fine-tunes an existing inversion encoder to perform essence extraction. Through extensive experiments, we demonstrate the superiority of our methods for essence transfer over existing methods for style transfer, domain adaptation, and text-based semantic editing. Our code is available at: https://github.com/hila-chefer/TargetCLIP.
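
To illustrate the blending idea described above, here is a minimal sketch of the optimization variant: a single essence vector is optimized so that adding it to any source latent moves the generated image toward the target in CLIP space while a small-edit penalty preserves identity. This is not the authors' implementation (see the linked repository for that); the `generator` and `clip_encode` callables, and the simplified transfer/identity losses, are assumptions introduced purely for illustration.

```python
# Sketch only, assuming hypothetical helpers:
#   generator(w)      -> images from StyleGAN latents of shape (n, 18, 512)
#   clip_encode(imgs) -> L2-normalized CLIP embeddings of shape (n, d)
# The loss terms below are simplified stand-ins for the paper's objectives.
import torch

def find_essence(generator, clip_encode, source_latents, target_image,
                 steps=300, lr=0.05, lam=0.5):
    """Optimize one additive essence vector b shared across all sources."""
    target_emb = clip_encode(target_image)                     # (1, d)
    b = torch.zeros_like(source_latents[:1]).requires_grad_(True)
    opt = torch.optim.Adam([b], lr=lr)
    for _ in range(steps):
        # Additive in StyleGAN's latent space: every source gets the same b.
        edited = generator(source_latents + b)
        edited_emb = clip_encode(edited)                       # (n, d)
        # Transfer term: pull each edited image toward the target in CLIP space.
        transfer = (1 - torch.cosine_similarity(edited_emb, target_emb)).mean()
        # Identity term: keep the edit small so the source identity survives,
        # without needing a facial recognition network.
        identity = b.pow(2).mean()
        loss = transfer + lam * identity
        opt.zero_grad()
        loss.backward()
        opt.step()
    return b.detach()
```

Once optimized, the same `b` can be added to the latent of a new source image to transfer the target's essence; the encoder-based variant in the paper instead fine-tunes an inversion encoder to predict such a vector directly.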

Citation (APA)

Chefer, H., Benaim, S., Paiss, R., & Wolf, L. (2022). Image-Based CLIP-Guided Essence Transfer. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13673 LNCS, pp. 695–711). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-19778-9_40
