Stop Pre-Training: Adapt Visual-Language Models to Unseen Languages

Citations: 0
Mendeley readers: 19

Abstract

Vision-Language Pre-training (VLP) has advanced the performance of many vision-language tasks, such as image-text retrieval, visual entailment, and visual reasoning. The pre-training mostly utilizes lexical databases and image queries in English. Previous work has demonstrated that pre-training in English does not transfer well to other languages in a zero-shot setting. However, multilingual pre-trained language models (MPLM) have excelled at a variety of single-modal language tasks. In this paper, we propose a simple yet efficient approach to adapt VLP to unseen languages using MPLM. We utilize a cross-lingual contextualized token embedding alignment approach to train text encoders for non-English languages. Our approach does not require image input and primarily uses machine translation, eliminating the need for target language data. Our evaluation across three distinct tasks (image-text retrieval, visual entailment, and natural language visual reasoning) demonstrates that this approach outperforms the state-of-the-art multilingual vision-language models without requiring large parallel corpora. Our code is available at https://github.com/Yasminekaroui/CliCoTea.
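To make the alignment idea concrete, the sketch below shows one way such a cross-lingual contextualized token-embedding alignment objective can be set up: a trainable multilingual "student" encoder is pushed to match, token by token, a frozen English "teacher" text encoder on word-aligned machine-translated sentence pairs. This is a hedged illustration only, not the authors' released CliCoTea code; the model names, the external word aligner, and the MSE objective are assumptions for the example.

# Minimal sketch of contextualized token-embedding alignment (assumptions noted above).
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

teacher_name = "bert-base-uncased"              # stand-in for the VLP text encoder (teacher)
student_name = "bert-base-multilingual-cased"   # stand-in for the MPLM (student)

teacher = AutoModel.from_pretrained(teacher_name).eval()   # frozen
student = AutoModel.from_pretrained(student_name)          # trained
teacher_tok = AutoTokenizer.from_pretrained(teacher_name)
student_tok = AutoTokenizer.from_pretrained(student_name)

def alignment_loss(en_sentence, tgt_sentence, aligned_pairs):
    """aligned_pairs: list of (english_token_idx, target_token_idx) pairs produced by
    an external word aligner on the (English, machine-translated) sentence pair."""
    with torch.no_grad():
        t_out = teacher(**teacher_tok(en_sentence, return_tensors="pt")).last_hidden_state[0]
    s_out = student(**student_tok(tgt_sentence, return_tensors="pt")).last_hidden_state[0]
    en_idx = torch.tensor([i for i, _ in aligned_pairs])
    tg_idx = torch.tensor([j for _, j in aligned_pairs])
    # Pull each target-language token embedding toward its aligned English counterpart.
    return F.mse_loss(s_out[tg_idx], t_out[en_idx])

# Illustrative usage (alignment indices are hypothetical; index 0 is the [CLS] token):
# loss = alignment_loss("a dog runs", "ein Hund rennt", [(1, 1), (2, 2), (3, 3)])
# loss.backward(); optimizer.step()   # optimizer covers the student parameters only

Because only the student encoder receives gradients, the image branch and the English text encoder of the pre-trained VLP model stay untouched, which is consistent with the paper's claim that no image input or target-language task data is needed.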

Citation (APA)

Karoui, Y., Lebret, R., Foroutan, N., & Aberer, K. (2023). Stop Pre-Training: Adapt Visual-Language Models to Unseen Languages. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 2, pp. 366–375). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-short.32
