Generating Images from Caption and Vice Versa via CLIP-Guided Generative Latent Space Search

Abstract

In this work we present CLIP-GLaSS, a novel zero-shot framework that generates an image corresponding to a given caption, or vice versa. CLIP-GLaSS builds on the CLIP neural network, which maps an image and its descriptive caption to similar embeddings. Conversely, CLIP-GLaSS takes a caption (or an image) as input and generates the image (or the caption) whose CLIP embedding is most similar to that of the input. This optimal image (or caption) is produced by a generative network whose latent space is explored by a genetic algorithm. Promising results are shown in experiments with the image generators BigGAN and StyleGAN2 and the text generator GPT2.
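The abstract describes the mechanism only at a high level. The sketch below (not the authors' code) illustrates the caption-to-image direction under simplifying assumptions: a plain elitist genetic loop over BigGAN's noise vector, scored by CLIP image-text similarity. It assumes the openai `clip` and `pytorch-pretrained-biggan` packages; the caption, population size, mutation scale, and the fixed ImageNet class are illustrative choices, and the paper's actual search also covers the class vector.

```python
import torch
import torch.nn.functional as F
import clip                                            # https://github.com/openai/CLIP
from pytorch_pretrained_biggan import BigGAN, one_hot_from_names

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
gan = BigGAN.from_pretrained("biggan-deep-256").to(device).eval()

caption = "a photo of a soap bubble"                   # illustrative target caption
with torch.no_grad():
    text_emb = F.normalize(
        clip_model.encode_text(clip.tokenize([caption]).to(device)), dim=-1)

# Fixed ImageNet class vector for brevity; CLIP-GLaSS searches over it as well.
class_vec = torch.tensor(one_hot_from_names(["soap bubble"], batch_size=1),
                         dtype=torch.float32, device=device)

# CLIP's input normalization constants.
CLIP_MEAN = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(1, 3, 1, 1)
CLIP_STD  = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(1, 3, 1, 1)

@torch.no_grad()
def fitness(z):
    """CLIP cosine similarity between BigGAN(z) and the target caption."""
    imgs = gan(z, class_vec.expand(z.shape[0], -1), 1.0)   # (N, 3, 256, 256) in [-1, 1]
    imgs = (imgs + 1) / 2                                   # rescale to [0, 1]
    imgs = F.interpolate(imgs, size=224, mode="bilinear", align_corners=False)
    imgs = (imgs - CLIP_MEAN) / CLIP_STD                    # CLIP preprocessing
    img_emb = F.normalize(clip_model.encode_image(imgs), dim=-1)
    return (img_emb @ text_emb.T).squeeze(-1)               # (N,) similarities

# Simple elitist genetic loop: keep the best latents, mutate them with Gaussian noise.
pop, sigma, elite_k = torch.randn(32, 128, device=device), 0.3, 8
for _ in range(50):
    elite = pop[fitness(pop).argsort(descending=True)[:elite_k]]
    children = elite.repeat(3, 1) + sigma * torch.randn(elite_k * 3, 128, device=device)
    pop = torch.cat([elite, children])

best_z = pop[fitness(pop).argmax()]                          # latent of the best image found
best_image = gan(best_z.unsqueeze(0), class_vec, 1.0)        # final image, in [-1, 1]
```

In the image-to-caption direction the roles are swapped: the genetic algorithm searches the text generator's (GPT2) input space, and the fitness is the similarity between the CLIP embedding of the generated caption and that of the input image.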

Cite

APA

Galatolo, F. A., Cimino, M. G. C. A., & Vaglini, G. (2021). Generating Images from Caption and Vice Versa via CLIP-Guided Generative Latent Space Search. In Proceedings of the International Conference on Image Processing and Vision Engineering, IMPROVE 2021 (pp. 166–174). SciTePress. https://doi.org/10.5220/0010503701660174
