An adversarial joint learning model for low-resource language semantic textual similarity


Abstract

Semantic Textual Similarity (STS) for low-resource languages is a challenging research problem with practical applications. Traditional solutions use machine translation to map the low-resource language into a resource-rich language such as English, so the final performance depends heavily on translation quality. To remove this dependence on machine translation while still taking advantage of data in resource-rich languages, this work proposes to jointly learn the STS task of a low-resource language and that of a resource-rich one, relying only on multilingual word embeddings. In particular, we project the low-resource language word embeddings into the semantic space of the resource-rich language via a translation matrix. To make the projected embeddings resemble those of the resource-rich language, a language discriminator is introduced as an adversarial teacher, so that the parameters of the sentence-similarity neural networks of the two tasks can be shared effectively. The effectiveness of our model is demonstrated by extensive experimental results.
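
The abstract describes three components: a translation matrix that projects low-resource word embeddings into the rich-language space, a language discriminator that adversarially pushes the projected embeddings to look like rich-language embeddings, and a sentence-similarity network whose parameters are shared between the two STS tasks. The following is only a minimal illustrative sketch of that idea in PyTorch; the module names, dimensions, encoder choice (an LSTM), and loss formulation are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of adversarial joint learning for cross-lingual STS.
# All names and hyperparameters below are hypothetical.
import torch
import torch.nn as nn

EMB_DIM, HID_DIM = 300, 128

class TranslationMatrix(nn.Module):
    """Projects low-resource word embeddings into the rich-language space."""
    def __init__(self, dim=EMB_DIM):
        super().__init__()
        self.W = nn.Linear(dim, dim, bias=False)  # the translation matrix

    def forward(self, x):
        return self.W(x)

class SentenceEncoder(nn.Module):
    """Sentence encoder shared by both STS tasks (assumed here to be an LSTM)."""
    def __init__(self, dim=EMB_DIM, hid=HID_DIM):
        super().__init__()
        self.rnn = nn.LSTM(dim, hid, batch_first=True)

    def forward(self, x):            # x: (batch, seq_len, dim)
        _, (h, _) = self.rnn(x)
        return h[-1]                 # (batch, hid)

class Discriminator(nn.Module):
    """Adversarial teacher: predicts which language an embedding came from."""
    def __init__(self, dim=EMB_DIM):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):
        return self.net(x)           # logit: 1 = resource-rich language

proj, enc, disc = TranslationMatrix(), SentenceEncoder(), Discriminator()
regressor = nn.Linear(2 * HID_DIM, 1)   # maps an encoded pair to an STS score
bce = nn.BCEWithLogitsLoss()
mse = nn.MSELoss()

def sts_loss(emb_a, emb_b, gold):
    """Shared similarity loss for a batch of sentence pairs.

    Rich-language pairs are fed directly; low-resource pairs are first
    passed through `proj` so both tasks share `enc` and `regressor`.
    """
    h = torch.cat([enc(emb_a), enc(emb_b)], dim=-1)
    return mse(regressor(h).squeeze(-1), gold)

def adversarial_losses(low_words, rich_words):
    """Discriminator learns to tell languages apart; the projection learns to fool it."""
    p = proj(low_words)                                   # (batch, EMB_DIM)
    d_loss = bce(disc(rich_words), torch.ones(rich_words.size(0), 1)) + \
             bce(disc(p.detach()), torch.zeros(p.size(0), 1))
    g_loss = bce(disc(p), torch.ones(p.size(0), 1))       # projection tries to look "rich"
    return d_loss, g_loss
```

In this sketch, training would alternate between updating the discriminator on `d_loss` and updating the projection, encoder, and regressor on `g_loss` plus the STS losses of both languages; the exact loss weighting and training schedule used in the paper are not given in the abstract.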

Citation (APA)
Tian, J., Lan, M., Wu, Y., Wang, J., Qiu, L., Li, S., … Si, L. (2018). An adversarial joint learning model for low-resource language semantic textual similarity. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10772 LNCS, pp. 89–101). Springer Verlag. https://doi.org/10.1007/978-3-319-76941-7_7
