A targeted retraining scheme of unsupervised word embeddings for specific supervised tasks


Abstract

This paper proposes a simple retraining scheme that purposefully adjusts unsupervised word embeddings for specific supervised tasks, such as sentence classification. Unlike current methods, which fine-tune word embeddings on the training set during the supervised learning procedure, our method treats task labels as implicit context information for retraining the embeddings, so that every word required for the intended task obtains a task-specific representation. Moreover, because our method is independent of the supervised learning process, it carries less risk of over-fitting. We have validated the soundness of our method on various sentence classification tasks. The accuracy improvements are particularly pronounced when only a scarce training set is available.
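The abstract does not spell out the retraining objective, so the following is only a minimal sketch of the general idea under stated assumptions: each sentence's class label is treated as an implicit context token, and pretrained word vectors are nudged via a skip-gram-with-negative-sampling style update so words co-occurring with a label move toward that label's context vector. All function and parameter names here are hypothetical, not from the paper.

```python
# Illustrative sketch (NOT the paper's exact algorithm): retrain pretrained
# word vectors by treating each sentence's class label as an implicit context
# token, using a skip-gram-with-negative-sampling style update.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def retrain(word_vecs, labeled_sents, n_labels, dim, lr=0.05, epochs=5, neg=2):
    """word_vecs: dict word -> np.ndarray (pretrained vectors, updated in place).
    labeled_sents: list of (list_of_words, label_id) pairs from the training set."""
    # One learnable context vector per class label (hypothetical parameterization).
    label_ctx = rng.normal(scale=0.1, size=(n_labels, dim))
    for _ in range(epochs):
        for words, y in labeled_sents:
            for w in words:
                v = word_vecs[w]
                # Positive update: pull the word vector toward its label's context.
                g = (1.0 - sigmoid(v @ label_ctx[y])) * lr
                v += g * label_ctx[y]
                label_ctx[y] += g * v
                # Negative updates: push away from a few other labels' contexts.
                for k in rng.choice(n_labels, size=neg):
                    if k == y:
                        continue
                    g = -sigmoid(v @ label_ctx[k]) * lr
                    v += g * label_ctx[k]
                    label_ctx[k] += g * v
    return word_vecs
```

Because the update touches every vocabulary word appearing in labeled sentences but never consults the downstream classifier's loss, it matches the abstract's claim of being independent of the supervised learning process.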

Citation (APA)

Qin, P., Xu, W., & Guo, J. (2017). A targeted retraining scheme of unsupervised word embeddings for specific supervised tasks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10235 LNAI, pp. 3–14). Springer Verlag. https://doi.org/10.1007/978-3-319-57529-2_1
