LMCAP: Few-shot Multilingual Image Captioning by Retrieval Augmented Language Model Prompting

Abstract

Multilingual image captioning has recently been tackled by training on large-scale machine-translated data, an expensive, noisy, and time-consuming process. We propose LMCAP, an image-blind few-shot multilingual captioning model that requires no multilingual caption data and works by prompting a language model with retrieved captions. Instead of following the standard encoder-decoder paradigm, given an image, LMCAP first retrieves the captions of similar images using a multilingual CLIP encoder. These captions are then combined into a prompt for an XGLM decoder to generate a caption in the desired language. In other words, the generation model never processes the image directly; it operates only on the retrieved captions. Experiments on the XM3600 dataset of geographically diverse images show that LMCAP is competitive with fully supervised multilingual captioning models, without requiring any supervised training on captioning data.
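The retrieve-then-prompt pipeline described in the abstract can be sketched in a few lines: embed the image with multilingual CLIP, pull the nearest captions from a datastore, and feed them to XGLM as a few-shot prompt. The sketch below is a minimal illustration under stated assumptions, not the paper's implementation; the checkpoint names, the toy caption datastore, and the prompt template are all assumptions made here for the example.

```python
# Minimal sketch of retrieval-augmented caption prompting.
# Assumptions: sentence-transformers CLIP checkpoints, a tiny in-memory
# caption datastore, and an illustrative prompt template.
from PIL import Image
import numpy as np
from sentence_transformers import SentenceTransformer
from transformers import AutoTokenizer, AutoModelForCausalLM

# Multilingual CLIP: an image encoder plus a text encoder aligned to it,
# so captions in many languages share the image embedding space.
img_encoder = SentenceTransformer("clip-ViT-B-32")
txt_encoder = SentenceTransformer("sentence-transformers/clip-ViT-B-32-multilingual-v1")

# Toy caption datastore (the paper retrieves from a large caption corpus).
datastore = [
    "A dog runs across a grassy field.",
    "Two children play football in a park.",
    "A plate of sushi on a wooden table.",
]
caption_embs = txt_encoder.encode(datastore, normalize_embeddings=True)

def retrieve(image_path: str, k: int = 2) -> list[str]:
    """Return the k captions whose embeddings are closest to the image."""
    img_emb = img_encoder.encode(Image.open(image_path), normalize_embeddings=True)
    scores = caption_embs @ img_emb  # cosine similarity (embeddings are unit norm)
    return [datastore[i] for i in np.argsort(-scores)[:k]]

# XGLM decoder; the model is "image-blind" and only ever sees text.
tok = AutoTokenizer.from_pretrained("facebook/xglm-564M")
lm = AutoModelForCausalLM.from_pretrained("facebook/xglm-564M")

def caption(image_path: str, language: str = "German") -> str:
    # Hypothetical prompt: retrieved captions followed by a target-language cue.
    prompt = ("Similar images show: " + " ".join(retrieve(image_path))
              + f" A caption in {language}:")
    inputs = tok(prompt, return_tensors="pt")
    out = lm.generate(**inputs, max_new_tokens=30, do_sample=False)
    # Decode only the newly generated tokens, not the prompt.
    return tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

print(caption("photo.jpg", language="German"))
```

Note the design point the abstract emphasizes: the image is used only at retrieval time, so the language model itself needs no visual pretraining and captions can be generated in any language XGLM covers.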

Citation (APA)

Ramos, R., Martins, B., & Elliott, D. (2023). LMCAP: Few-shot Multilingual Image Captioning by Retrieval Augmented Language Model Prompting. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 1635–1651). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-acl.104
