Analogy Training Multilingual Encoders

Abstract

Language encoders encode words and phrases in ways that capture their local semantic relatedness, but are known to be globally inconsistent. Global inconsistency can seemingly be corrected for, in part, by leveraging signals from knowledge bases, but previous results are partial and limited to monolingual English encoders. We extract a large-scale multilingual, multi-word analogy dataset from Wikidata for diagnosing and correcting global inconsistencies, and implement a four-way Siamese BERT architecture for grounding multilingual BERT (mBERT) in Wikidata through analogy training. We show that analogy training not only improves the global consistency of mBERT and the isomorphism of its language-specific subspaces, but also leads to significant gains on downstream tasks such as bilingual dictionary induction and sentence retrieval.
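
The four-way Siamese architecture described in the abstract can be read as one shared encoder applied to all four terms of an analogy a : b :: c : d, trained so that the relation offsets b - a and d - c agree. The sketch below is a minimal PyTorch/Transformers illustration of that idea, assuming [CLS] pooling, a cosine offset score, and a hinge loss against a corrupted quadruple; the class name, margin, and example analogies are hypothetical stand-ins, not the authors' released code.

    import torch
    import torch.nn.functional as F
    from transformers import AutoModel, AutoTokenizer

    class FourWaySiameseBert(torch.nn.Module):
        # One shared mBERT encoder applied to all four analogy terms
        # (the weight sharing is what makes the architecture "Siamese").
        def __init__(self, model_name="bert-base-multilingual-cased"):
            super().__init__()
            self.encoder = AutoModel.from_pretrained(model_name)

        def encode(self, batch):
            # [CLS] vector as the phrase embedding (one common pooling choice).
            return self.encoder(**batch).last_hidden_state[:, 0]

        def forward(self, a, b, c, d):
            ea, eb, ec, ed = (self.encode(x) for x in (a, b, c, d))
            # a : b :: c : d holds when the offsets b - a and d - c align.
            return F.cosine_similarity(eb - ea, ed - ec)

    tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
    model = FourWaySiameseBert()

    def tok(phrases):
        return tokenizer(phrases, return_tensors="pt",
                         padding=True, truncation=True)

    # Toy Wikidata-style quadruple (capital-of relation) plus a corrupted
    # negative, scored with a margin of 0.5 (an assumed hyperparameter).
    pos = model(tok(["France"]), tok(["Paris"]), tok(["Japan"]), tok(["Tokyo"]))
    neg = model(tok(["France"]), tok(["Paris"]), tok(["Japan"]), tok(["Berlin"]))
    loss = F.relu(0.5 - pos + neg).mean()
    loss.backward()

Sharing weights across the four branches is the point of the Siamese design: a single set of encoder parameters must embed both sides of every relation consistently, which is what pushes the space toward the global consistency the paper measures.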

Cite

APA: Garneau, N., Hartmann, M., Sandholm, A., Ruder, S., Vulić, I., & Søgaard, A. (2021). Analogy Training Multilingual Encoders. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 14B, pp. 12884–12892). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i14.17524
