Bilingual lexicon extraction has been studied for decades, and most previous methods have relied on parallel corpora or bilingual dictionaries. Recent studies have shown that a bilingual dictionary can be built by aligning monolingual word embedding spaces in an unsupervised way. Building on recent advances in generative models, we propose a novel approach that builds cross-lingual dictionaries via latent variable models and adversarial training, without any parallel corpora. To demonstrate the effectiveness of our approach, we evaluate it on several language pairs; the experimental results show that our model achieves competitive, and in some cases superior, performance compared with several state-of-the-art models.
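The abstract does not spell out the training procedure, but the line of work it builds on aligns two monolingual embedding spaces with a linear mapping trained adversarially (as in Conneau et al., 2018). Below is a minimal illustrative sketch of that general idea, not the authors' latent-variable model: the random embeddings, network sizes, and hyperparameters are all assumptions standing in for real pretrained vectors.

```python
# Minimal sketch (assumed setup, not the paper's exact model): a linear mapping W
# sends source-language embeddings into the target space, while a discriminator
# tries to tell mapped source vectors from real target vectors.
import torch
import torch.nn as nn

d = 300                      # embedding dimension (assumed)
n_src, n_tgt = 5000, 5000    # vocabulary sizes (assumed)

# Stand-ins for pretrained monolingual embeddings (replace with fastText, etc.).
src_emb = torch.randn(n_src, d)
tgt_emb = torch.randn(n_tgt, d)

mapping = nn.Linear(d, d, bias=False)   # W: source space -> target space
discriminator = nn.Sequential(          # scores whether a vector is a true target embedding
    nn.Linear(d, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1),
)

opt_map = torch.optim.SGD(mapping.parameters(), lr=0.1)
opt_dis = torch.optim.SGD(discriminator.parameters(), lr=0.1)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    src_batch = src_emb[torch.randint(0, n_src, (32,))]
    tgt_batch = tgt_emb[torch.randint(0, n_tgt, (32,))]

    # 1) Discriminator step: label mapped source vectors 0 and real target vectors 1.
    with torch.no_grad():
        mapped = mapping(src_batch)
    dis_loss = bce(discriminator(mapped), torch.zeros(32, 1)) + \
               bce(discriminator(tgt_batch), torch.ones(32, 1))
    opt_dis.zero_grad(); dis_loss.backward(); opt_dis.step()

    # 2) Mapping step: update W so mapped source vectors fool the discriminator.
    map_loss = bce(discriminator(mapping(src_batch)), torch.ones(32, 1))
    opt_map.zero_grad(); map_loss.backward(); opt_map.step()

    # Keep W close to orthogonal, a common trick to preserve distances:
    # W <- (1 + beta) * W - beta * (W W^T) W with a small beta.
    with torch.no_grad():
        W = mapping.weight
        W.copy_(1.01 * W - 0.01 * W @ W.t() @ W)

# A bilingual lexicon can then be read off by taking, for each source word,
# the nearest target-space neighbours of its mapped embedding.
```

With real pretrained embeddings in place of the random tensors, the learned mapping induces word translations via nearest-neighbour retrieval; the paper's contribution replaces parts of this pipeline with latent variable modeling, which this sketch does not attempt to reproduce.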
Dou, Z. Y., Zhou, Z. H., & Huang, S. (2018). Unsupervised bilingual lexicon induction via latent variable models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018 (pp. 621–626). Association for Computational Linguistics. https://doi.org/10.18653/v1/d18-1062