Improving word embeddings via combining with complementary languages


Abstract

Word embeddings have recently demonstrated outstanding results across various NLP tasks. However, most existing methods for learning word embeddings use a monolingual corpus alone, without exploiting the linguistic relationships among languages. In this paper, we introduce CCL (Combination with Complementary Languages), a novel method for improving word embeddings. Under this method, each word's embedding is replaced by its center embedding, which is obtained by combining it with the corresponding word embeddings in other languages. We apply our method to several baseline models and evaluate the quality of the resulting word embeddings on the word similarity task across two benchmark datasets. Despite its simplicity, the results show that our method is surprisingly effective in capturing semantic information and outperforms the baselines by a large margin, up to 20 points of Spearman rank correlation (ρ × 100). © 2014 Springer International Publishing.
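The abstract only sketches the combination step, so the following is a minimal illustration of the idea rather than the paper's exact procedure. It assumes the embedding spaces of the different languages are already aligned in one vector space, that a bilingual dictionary (`translations`) maps each word to its counterparts, and that "combining" means simple averaging; all function and variable names here are hypothetical, and the word-similarity evaluation mirrors the standard protocol (cosine similarity versus human ratings, scored by Spearman's ρ × 100) rather than anything specific to the paper.

```python
import numpy as np
from scipy.stats import spearmanr

def center_embedding(word, embeddings, translations, foreign_embeddings):
    """Return the centroid of a word's own vector and the vectors of its
    counterparts in complementary languages (a hypothetical reading of CCL).

    Assumes all embedding spaces are already aligned in one vector space.
    """
    vectors = [embeddings[word]]
    for lang, counterpart in translations.get(word, {}).items():
        lang_embeddings = foreign_embeddings[lang]
        if counterpart in lang_embeddings:
            vectors.append(lang_embeddings[counterpart])
    # "Combination" here is a plain average; the paper may weight languages.
    return np.mean(vectors, axis=0)

def word_similarity_score(embeddings, benchmark_pairs):
    """Score embeddings on a word-similarity benchmark: cosine similarity of
    each word pair versus human ratings, reported as Spearman's rho x 100."""
    model_scores, human_scores = [], []
    for w1, w2, rating in benchmark_pairs:
        if w1 in embeddings and w2 in embeddings:
            v1, v2 = embeddings[w1], embeddings[w2]
            cosine = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
            model_scores.append(cosine)
            human_scores.append(rating)
    rho, _ = spearmanr(model_scores, human_scores)
    return rho * 100
```

A plain average treats every complementary language as equally informative, which keeps the method as simple as the abstract suggests; any weighting or alignment details would come from the full paper.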

Citation (APA)

Li, C., Xu, B., Wu, G., Zhuang, T., Wang, X., & Ge, W. (2014). Improving word embeddings via combining with complementary languages. In Lecture Notes in Computer Science (Vol. 8436 LNAI, pp. 313–318). Springer. https://doi.org/10.1007/978-3-319-06483-3_31
