Clue: Cross-modal coherence modeling for caption generation

Citations: 37 · Mendeley readers: 143

Abstract

We use coherence relations inspired by computational models of discourse to study the information needs and goals of image captioning. Using an annotation protocol devised specifically for capturing image–caption coherence relations, we annotate 10,000 instances drawn from publicly available image–caption pairs. We introduce a new task for learning inferences over imagery and text, coherence relation prediction, and show that these coherence annotations can be exploited both to learn relation classifiers as an intermediary step and to train coherence-aware, controllable image captioning models. The results show a dramatic improvement in the consistency and quality of the generated captions with respect to information needs specified via coherence relations.

Citation (APA)

Alikhani, M., Sharma, P., Li, S., Soricut, R., & Stone, M. (2020). Clue: Cross-modal coherence modeling for caption generation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 6525–6535). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.acl-main.583
