Combining Static and Contextualised Multilingual Embeddings


Abstract

Static and contextual multilingual embeddings have complementary strengths. Static embeddings, while less expressive than contextual language models, can be more straightforwardly aligned across multiple languages. We combine the strengths of static and contextual models to improve multilingual representations. We extract static embeddings for 40 languages from XLM-R, validate those embeddings with cross-lingual word retrieval, and then align them using VecMap. This results in high-quality, highly multilingual static embeddings. Then we apply a novel continued pre-training approach to XLM-R, leveraging the high-quality alignment of our static embeddings to better align the representation space of XLM-R. We show positive results for multiple complex semantic tasks. We release the static embeddings and the continued pre-training code. Unlike most previous work, our continued pre-training approach does not require parallel text.
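The first step of the pipeline described above (distilling static per-language embeddings from XLM-R before aligning them with VecMap) can be illustrated roughly as follows. This is a minimal sketch, not the authors' released code: it assumes the common approach of averaging XLM-R's contextual subword representations of a word over many example sentences, and the function name, chosen layer, and token-matching logic are illustrative assumptions.

```python
# Minimal sketch (not the authors' released code) of distilling a static word
# embedding from XLM-R by averaging its contextual subword representations.
# Layer choice, token matching, and names are illustrative assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModel.from_pretrained("xlm-roberta-base")
model.eval()

def static_embedding(word: str, contexts: list[str], layer: int = 8) -> torch.Tensor:
    """Average the hidden states of `word`'s subword tokens over example sentences."""
    word_ids = tokenizer(word, add_special_tokens=False)["input_ids"]
    vectors = []
    for sentence in contexts:
        enc = tokenizer(sentence, return_tensors="pt", truncation=True)
        with torch.no_grad():
            hidden = model(**enc, output_hidden_states=True).hidden_states[layer][0]
        ids = enc["input_ids"][0].tolist()
        # Simplified matching: locate the word's subword ids inside the sentence ids
        # (a real pipeline would use the tokenizer's offset mappings instead).
        for i in range(len(ids) - len(word_ids) + 1):
            if ids[i : i + len(word_ids)] == word_ids:
                vectors.append(hidden[i : i + len(word_ids)].mean(dim=0))
                break
    return torch.stack(vectors).mean(dim=0)
```

Per-language vectors obtained in this way would then be validated with cross-lingual word retrieval and aligned with VecMap, as the abstract describes.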

Cite

APA

Hämmerl, K., Libovický, J., & Fraser, A. (2022). Combining Static and Contextualised Multilingual Embeddings. In Findings of the Association for Computational Linguistics: ACL 2022 (pp. 2316–2329). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.findings-acl.182
