Multilingual vision-language (V&L) pre-training has achieved remarkable progress in learning universal representations across different modalities and languages. Despite this recent success, challenges remain that limit further improvement of V&L pre-trained models in multilingual settings. In particular, current V&L pre-training methods rely heavily on strictly-aligned multilingual image-text pairs generated from English-centric datasets through machine translation, yet the cost of collecting and translating such strictly-aligned datasets is often prohibitive. In this paper, we propose Regularized Contrastive Cross-lingual Cross-modal (RC3) pre-training, which further exploits the more abundant weakly-aligned multilingual image-text pairs. Specifically, we design a regularized cross-lingual visio-textual contrastive learning objective that constrains the representation proximity of weakly-aligned visio-textual inputs according to their textual relevance. In addition, whereas existing V&L pre-training approaches handle visual inputs with either region-of-interest (ROI) features or patch embeddings, we flexibly integrate both forms of visual features into our model for pre-training and downstream multimodal tasks. Extensive experiments on 5 downstream multimodal tasks across 6 languages demonstrate the effectiveness of our proposed method over competitive baseline models, with stronger zero-shot capability.
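The abstract does not spell out the training objective, so the following is only a minimal PyTorch sketch of what a relevance-regularized cross-modal contrastive loss in this spirit could look like. The function name `rc3_style_loss`, the symmetric InfoNCE form, and the per-pair `relevance` weighting scheme are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def rc3_style_loss(img_emb, txt_emb, relevance, temperature=0.07):
    """Hypothetical sketch of a relevance-regularized cross-modal
    contrastive loss. `relevance` in [0, 1] scores how well each
    (possibly weakly-aligned) caption matches its paired image;
    strictly-aligned pairs would carry relevance close to 1.
    """
    # L2-normalize embeddings so dot products are cosine similarities.
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)

    # Pairwise similarity matrix; off-diagonal entries act as
    # in-batch negatives.
    logits = img_emb @ txt_emb.t() / temperature
    targets = torch.arange(img_emb.size(0), device=img_emb.device)

    # Symmetric InfoNCE over image->text and text->image directions,
    # kept per-pair so each pair can be reweighted individually.
    per_pair = 0.5 * (
        F.cross_entropy(logits, targets, reduction="none")
        + F.cross_entropy(logits.t(), targets, reduction="none")
    )

    # Regularize by textual relevance: noisy weakly-aligned pairs are
    # pulled together less aggressively than well-aligned ones.
    return (relevance * per_pair).mean()
```

Under this assumed weighting, strictly-aligned pairs contribute the full contrastive pull, while weakly-aligned pairs are only loosely constrained in proportion to how relevant their text is to the image.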
Zhou, C., Liang, Y., Meng, F., Xu, J., Su, J., & Zhou, J. (2023). RC3: Regularized Contrastive Cross-lingual Cross-modal Pre-training. In Findings of the Association for Computational Linguistics: ACL 2023 (pp. 11747–11762). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.findings-acl.746