Concadia: Towards Image-Based Text Generation with a Purpose

Abstract

Current deep learning models often achieve excellent results on benchmark image-to-text datasets but fail to generate texts that are useful in practice. We argue that to close this gap, it is vital to distinguish descriptions from captions based on their distinct communicative roles. Descriptions focus on visual features and are meant to replace an image (often to increase accessibility), whereas captions appear alongside an image to supply additional information. To motivate this distinction and help people put it into practice, we introduce the publicly available Wikipedia-based dataset Concadia consisting of 96,918 images with corresponding English-language descriptions, captions, and surrounding context. Using insights from Concadia, models trained on it, and a preregistered human-subjects experiment with human- and model-generated texts, we characterize the commonalities and differences between descriptions and captions. In addition, we show that, for generating both descriptions and captions, it is useful to augment image-to-text models with representations of the textual context in which the image appeared.
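To make the description/caption/context distinction concrete, the sketch below shows one way such data might be loaded and inspected in Python. It is a minimal illustration only: the filename concadia.json and the field names image_file, description, caption, and context are assumptions for the example, not the dataset's documented schema.

```python
import json
from dataclasses import dataclass
from pathlib import Path


@dataclass
class ConcadiaEntry:
    """One Wikipedia image with its three associated texts (assumed schema)."""
    image_file: str   # image filename (assumed field name)
    description: str  # alt-text-style replacement for the image
    caption: str      # text displayed alongside the image
    context: str      # surrounding article text in which the image appeared


def load_entries(path: Path) -> list[ConcadiaEntry]:
    """Parse a JSON export into typed records; the layout here is illustrative."""
    with path.open(encoding="utf-8") as f:
        raw = json.load(f)
    return [
        ConcadiaEntry(
            image_file=item["image_file"],
            description=item["description"],
            caption=item["caption"],
            context=item["context"],
        )
        for item in raw
    ]


if __name__ == "__main__":
    entries = load_entries(Path("concadia.json"))  # hypothetical filename
    sample = entries[0]
    # Descriptions are meant to replace the image; captions complement it.
    print("DESCRIPTION:", sample.description)
    print("CAPTION:    ", sample.caption)
    print("CONTEXT:    ", sample.context[:200], "...")
```

In a setup like this, the context field is the natural input to pass to a generation model alongside image features, in line with the paper's finding that textual context helps when generating both descriptions and captions.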

Citation (APA)

Kreiss, E., Fang, F., Goodman, N. D., & Potts, C. (2022). Concadia: Towards Image-Based Text Generation with a Purpose. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 (pp. 4667–4684). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.emnlp-main.308
