Mapping unseen words to task-trained embedding spaces

6 citations · 134 Mendeley readers

Abstract

We consider the supervised training setting in which we learn task-specific word embeddings. We assume that we start with initial embeddings learned from unlabelled data and update them to learn task-specific embeddings for words in the supervised training data. However, for new words in the test set, we must use either their initial embeddings or a single unknown embedding, which often leads to errors. We address this by learning a neural network that maps from initial embeddings to the task-specific embedding space, via a multi-loss objective function. The technique is general, but here we demonstrate its use for improved dependency parsing (especially for sentences with out-of-vocabulary words), as well as for downstream improvements on sentiment analysis.
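The core idea, stripped to its simplest form, can be sketched as follows. This is a toy illustration, not the authors' architecture: it uses a plain linear mapper trained by gradient descent on a squared-error loss (the paper uses a neural network with a multi-loss objective), and all data here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8         # toy embedding dimension
n_train = 200 # in-vocabulary words that have both embedding types

# Synthetic setup: pretend the task-trained embeddings are an (unknown)
# linear transform of the initial embeddings, plus noise.
W_true = rng.normal(size=(d, d))
X = rng.normal(size=(n_train, d))                      # initial embeddings
Y = X @ W_true + 0.01 * rng.normal(size=(n_train, d))  # task embeddings

# Fit the mapper on in-vocabulary pairs by gradient descent on MSE.
W = np.zeros((d, d))
lr = 0.05
for _ in range(500):
    grad = 2 * X.T @ (X @ W - Y) / n_train
    W -= lr * grad

# At test time, project an unseen word's initial embedding into the
# task-specific space instead of falling back to an <unk> vector.
x_unseen = rng.normal(size=d)
y_pred = x_unseen @ W

# Sanity check: the mapped vector should be close to where the (here
# known) true transform would place it.
residual = np.linalg.norm(y_pred - x_unseen @ W_true)
print(residual)
```

In the paper's setting the supervision pairs are the words updated during task training, and the mapper is applied only to out-of-vocabulary test words.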

Citation (APA)

Madhyastha, P. S., Bansal, M., Gimpel, K., & Livescu, K. (2016). Mapping unseen words to task-trained embedding spaces. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 100–110). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w16-1612
