Knowledge-Enhanced Bilingual Textual Representations for Cross-Lingual Semantic Textual Similarity

Abstract

Joint learning of words and entities is advantageous for various NLP tasks, yet most existing work focuses on a single-language setting. Cross-lingual representation learning has received considerable attention recently, but it is still restricted by the availability of parallel data. In this paper, a method is proposed to jointly embed texts and entities using comparable (rather than parallel) data. In addition to evaluation on public semantic textual similarity datasets, a new task (cross-lingual text extraction) is proposed to assess similarity between texts, and a dataset for this task is contributed. Experiments show that the proposed method outperforms cross-lingual representation methods trained on parallel data in cross-lingual tasks, and achieves competitive results in mono-lingual tasks.
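As an illustrative sketch only, not the authors' method, the Python snippet below shows the general idea behind embedding-based cross-lingual semantic textual similarity: token vectors from a shared bilingual space are averaged into sentence vectors and compared with cosine similarity. The toy embedding table and token lists are assumptions made for illustration; the paper's jointly learned text and entity representations would take their place.

import numpy as np

# Toy bilingual embedding table (token -> vector in a shared space).
# Stand-in values for illustration; real vectors would be loaded from
# a trained bilingual text/entity embedding model.
embeddings = {
    "cat":    np.array([0.90, 0.10, 0.00]),
    "chat":   np.array([0.88, 0.12, 0.02]),   # French for "cat"
    "sleeps": np.array([0.10, 0.80, 0.10]),
    "dort":   np.array([0.12, 0.78, 0.08]),   # French for "sleeps"
}

def sentence_vector(tokens):
    # Simple bag-of-embeddings: average the vectors of known tokens.
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    return np.mean(vecs, axis=0)

def cosine(u, v):
    # Cosine similarity between two sentence vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

english = sentence_vector(["cat", "sleeps"])
french = sentence_vector(["chat", "dort"])
print(f"cross-lingual similarity: {cosine(english, french):.3f}")

With a well-aligned bilingual space, semantically equivalent sentences in different languages should receive a high similarity score, which is the property the cross-lingual STS and text extraction evaluations measure.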

Citation (APA)
Lu, H., Cao, Y., Lei, H., & Li, J. (2019). Knowledge-Enhanced Bilingual Textual Representations for Cross-Lingual Semantic Textual Similarity. In Communications in Computer and Information Science (Vol. 1058, pp. 425–440). Springer Verlag. https://doi.org/10.1007/978-981-15-0118-0_33
