Compositional generalization in image captioning

Citations: 25 · Mendeley readers: 117

Abstract

Image captioning models are usually evaluated on their ability to describe a held-out set of images, not on their ability to generalize to unseen concepts. We study the problem of compositional generalization, which measures how well a model composes unseen combinations of concepts when describing images. State-of-the-art image captioning models generalize poorly on this task. To address this, we propose a multi-task model that combines caption generation with image-sentence ranking, together with a decoding mechanism that re-ranks generated captions according to their similarity to the image. This model is substantially better at generalizing to unseen combinations of concepts than state-of-the-art captioning models.
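
The decoding mechanism described in the abstract is essentially generate-then-re-rank: the captioning model proposes several candidates via beam search, and an image-sentence ranking model scores each candidate against the image in a joint embedding space. Below is a minimal sketch of this idea; caption_model, rank_model, and their methods (beam_search, embed_image, embed_caption) are hypothetical placeholder interfaces, not the authors' actual implementation.

```python
import torch

def rerank_decode(caption_model, rank_model, image, beam_size=5):
    """Generate-then-re-rank decoding (sketch).

    Assumed placeholder interfaces:
      caption_model.beam_search(image, k) -> list of k candidate captions
      rank_model.embed_image(image)       -> image embedding (1-D tensor)
      rank_model.embed_caption(caption)   -> caption embedding (1-D tensor)
    """
    # Step 1: the generation model proposes candidate captions.
    candidates = caption_model.beam_search(image, k=beam_size)

    # Step 2: score each candidate by cosine similarity to the image
    # in the joint embedding space learned by the ranking task.
    img_emb = rank_model.embed_image(image)
    scores = [
        torch.cosine_similarity(
            img_emb, rank_model.embed_caption(c), dim=-1
        ).item()
        for c in candidates
    ]

    # Step 3: return the caption whose embedding is closest to the image.
    best = max(range(len(candidates)), key=lambda i: scores[i])
    return candidates[best]
```

The re-ranking step is what rewards image-congruent (and thus compositionally correct) captions that the generator alone might rank below safer, more generic ones.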

Citation (APA)
Nikolaus, M., Abdou, M., Lamm, M., Aralikatte, R., & Elliott, D. (2019). Compositional generalization in image captioning. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL) (pp. 87–98). Association for Computational Linguistics. https://doi.org/10.18653/v1/k19-1009
