Improving Word Embeddings through Iterative Refinement of Word- and Character-level Models

Abstract

Embedding of rare and out-of-vocabulary (OOV) words is an important open NLP problem. A popular solution is to train a character-level neural network to reproduce the embeddings of a standard word embedding model. The trained network can then assign a vector to any input string, including OOV and rare words. We enhance this approach and introduce an algorithm that iteratively refines and improves both the word- and character-level models. We demonstrate that our method outperforms existing algorithms on five word-similarity data sets, and that it can be successfully applied to job title normalization, an important problem in the e-recruitment domain that suffers from the OOV problem.
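To make the mimic-model idea concrete, here is a minimal, hypothetical sketch (not the authors' implementation): a linear map from character-trigram counts to word vectors is fit by least squares on in-vocabulary words, and the same map then assigns a vector to any OOV string. The vocabulary, embedding size, and random embeddings below are all illustrative stand-ins; the paper's character model is a neural network and its refinement loop alternates updates of both models.

```python
# Illustrative sketch of a character-level "mimic" model for OOV embedding.
# Assumptions (not from the paper): a linear least-squares map over character
# trigrams stands in for the character-level neural network, and the word
# embeddings are random placeholders.
import numpy as np

def trigrams(word):
    """Character trigrams with boundary markers, e.g. '#da', 'dat', ..."""
    w = f"#{word}#"
    return [w[i:i + 3] for i in range(len(w) - 2)]

def fit_char_model(vocab, emb):
    """Fit a least-squares map from trigram counts to embedding vectors."""
    feats = sorted({t for w in vocab for t in trigrams(w)})
    idx = {t: i for i, t in enumerate(feats)}
    X = np.zeros((len(vocab), len(feats)))
    for r, w in enumerate(vocab):
        for t in trigrams(w):
            X[r, idx[t]] += 1.0
    W, *_ = np.linalg.lstsq(X, emb, rcond=None)
    return idx, W

def embed(word, idx, W):
    """Embed any string, known or OOV, via its trigram counts."""
    x = np.zeros(W.shape[0])
    for t in trigrams(word):
        if t in idx:  # trigrams unseen in training are simply skipped
            x[idx[t]] += 1.0
    return x @ W

rng = np.random.default_rng(0)
vocab = ["data engineer", "data scientist", "software engineer"]
emb = rng.normal(size=(len(vocab), 8))   # placeholder word embeddings
idx, W = fit_char_model(vocab, emb)
vec = embed("data analyst", idx, W)      # an OOV string still gets a vector
```

An iterative refinement step in the spirit of the paper would then replace the embeddings of rare words with the character model's predictions and retrain the word model, alternating until convergence.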

Citation (APA)

Ha, P., Zhang, S., Djuric, N., & Vucetic, S. (2020). Improving Word Embeddings through Iterative Refinement of Word- and Character-level Models. In COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference (pp. 1204–1213). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.coling-main.104
