TIME: Text and Image Mutual-Translation Adversarial Networks

Citations: 22 · Mendeley readers: 34
Abstract

Focusing on text-to-image (T2I) generation, we propose Text and Image Mutual-Translation Adversarial Networks (TIME), a lightweight but effective model that jointly learns a T2I generator G and an image-captioning discriminator D under the Generative Adversarial Network framework. While previous methods treat T2I as a uni-directional task and rely on pre-trained language models to enforce image-text consistency, TIME requires neither extra modules nor pre-training. We show that the performance of G can be boosted substantially by training it jointly with D as a language model. Specifically, we adopt Transformers to model the cross-modal connections between image features and word embeddings, and design an annealing conditional hinge loss that dynamically balances the adversarial learning. In our experiments, TIME achieves state-of-the-art (SOTA) performance on the CUB dataset (Inception Score of 4.91 and Fréchet Inception Distance of 14.3), and shows promising results on the MS-COCO dataset as well as on image captioning and downstream vision-language tasks.
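The annealing conditional hinge loss is only named in the abstract; the paper's exact formulation is not given here. The following PyTorch-style sketch is therefore only a rough illustration of the general idea: a standard hinge GAN loss whose conditional (text-matching) term is scaled by an annealing coefficient `alpha` that changes over training. All function names, the split into conditional/unconditional discriminator scores, and the annealing schedule are assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def hinge_d_loss(d_real_uncond, d_fake_uncond, d_real_cond, d_fake_cond, alpha):
    """Discriminator hinge loss with an annealed conditional term (illustrative sketch).

    d_*_uncond: realism scores for real/fake images.
    d_*_cond:   image-text matching scores for real/fake images.
    alpha:      annealing coefficient balancing the conditional term (assumed schedule).
    """
    uncond = F.relu(1.0 - d_real_uncond).mean() + F.relu(1.0 + d_fake_uncond).mean()
    cond = F.relu(1.0 - d_real_cond).mean() + F.relu(1.0 + d_fake_cond).mean()
    return uncond + alpha * cond


def hinge_g_loss(d_fake_uncond, d_fake_cond, alpha):
    """Generator hinge loss: maximize D's scores on fakes, with the conditional
    term weighted by the same annealing coefficient (again, an assumption)."""
    return -(d_fake_uncond.mean() + alpha * d_fake_cond.mean())


# Example of an assumed linear annealing schedule for alpha over training steps.
def anneal_alpha(step, total_steps, alpha_max=1.0):
    return alpha_max * min(1.0, step / max(1, total_steps))
```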

Citation (APA)

Liu, B., Song, K., Zhu, Y., de Melo, G., & Elgammal, A. (2021). TIME: Text and Image Mutual-Translation Adversarial Networks. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 3A, pp. 2082–2090). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i3.16305
