Abstract
Mapping word embeddings of different languages into a single space has multiple applications. In order to map from a source space into a target space, a common approach is to learn a linear mapping that minimizes the distances between translation equivalents listed in a bilingual dictionary. In this paper, we propose a framework that generalizes previous work, provides an efficient exact method to learn the optimal linear transformation, and yields the best bilingual results in translation induction while preserving monolingual performance in an analogy task.
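As a minimal sketch of the kind of mapping the abstract describes: under an orthogonality constraint, the linear transformation that minimizes the squared distances between dictionary pairs has an exact closed-form solution via SVD (the orthogonal Procrustes problem). The function and variable names below are illustrative, not taken from the paper, and the synthetic data stands in for real embedding matrices.

```python
import numpy as np

def learn_orthogonal_mapping(X, Z):
    """Solve min_W ||X W - Z||_F s.t. W orthogonal (Procrustes).

    X, Z: (n, d) arrays holding source- and target-language embeddings
    of the n word pairs in the bilingual dictionary, row-aligned.
    """
    # The optimizer is W = U V^T, where X^T Z = U S V^T is the SVD.
    U, _, Vt = np.linalg.svd(X.T @ Z)
    return U @ Vt

# Synthetic check: recover a known orthogonal map from paired "embeddings".
rng = np.random.default_rng(0)
W_true, _ = np.linalg.qr(rng.normal(size=(5, 5)))  # random orthogonal matrix
X = rng.normal(size=(100, 5))                      # fake source embeddings
Z = X @ W_true                                     # fake target embeddings
W = learn_orthogonal_mapping(X, Z)
print(np.allclose(X @ W, Z, atol=1e-8))
```

Because the solution is orthogonal, it preserves dot products and norms within the source space, which is one way to keep monolingual structure (e.g. analogy performance) intact while aligning the two languages.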
Artetxe, M., Labaka, G., & Agirre, E. (2016). Learning principled bilingual mappings of word embeddings while preserving monolingual invariance. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP) (pp. 2289–2294). Association for Computational Linguistics. https://doi.org/10.18653/v1/d16-1250