Learning better embeddings for rare words using distributional representations

Abstract

There are two main types of word representations: low-dimensional embeddings and high-dimensional distributional vectors, in which each dimension corresponds to a context word. In this paper, we initialize an embedding-learning model with distributional vectors. Evaluation on word similarity shows that this initialization significantly increases the quality of embeddings for rare words.
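
A hedged sketch of the idea, not the authors' exact method: the abstract does not specify how the distributional vectors are built or fed into the embedding learner. One plausible instantiation, shown below in Python, assumes PPMI-weighted co-occurrence counts as the distributional vectors, truncated SVD to match the embedding dimensionality, and gensim's Word2Vec as the embedding-learning model; the toy corpus, window size, and dimensionality are illustrative placeholders.

import numpy as np
from collections import Counter
from gensim.models import Word2Vec

# Toy corpus; in practice this would be a large text collection.
sentences = [["the", "rare", "word", "appears", "here"],
             ["embeddings", "for", "rare", "words", "appear", "here"]]

# Step 1 (assumed): high-dimensional distributional vectors as a
# PPMI-weighted word-context co-occurrence matrix.
window = 2
pair_counts = Counter()
for sent in sentences:
    for i, w in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if i != j:
                pair_counts[(w, sent[j])] += 1

vocab = sorted({w for w, _ in pair_counts})
idx = {w: k for k, w in enumerate(vocab)}
total = sum(pair_counts.values())
row, col = Counter(), Counter()
for (w, c), n in pair_counts.items():
    row[w] += n
    col[c] += n

M = np.zeros((len(vocab), len(vocab)))
for (w, c), n in pair_counts.items():
    pmi = np.log(n * total / (row[w] * col[c]))
    M[idx[w], idx[c]] = max(0.0, pmi)  # positive PMI

# Step 2 (assumed): reduce to the embedding dimensionality via truncated SVD.
dim = 5  # illustrative; must not exceed the vocabulary size
U, S, _ = np.linalg.svd(M, full_matrices=False)
init = (U[:, :dim] * S[:dim]).astype(np.float32)

# Step 3: seed the embedding-learning model with the reduced vectors,
# then train as usual (gensim 4.x API).
model = Word2Vec(vector_size=dim, window=window, min_count=1, seed=1)
model.build_vocab(sentences)
for w in model.wv.index_to_key:
    model.wv.vectors[model.wv.key_to_index[w]] = init[idx[w]]
model.train(sentences, total_examples=model.corpus_count, epochs=5)

Overwriting model.wv.vectors after build_vocab but before train replaces gensim's random initialization, which is the only change this sketch makes relative to standard word2vec training.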

Citation (APA)

Sergienya, I., & Schütze, H. (2015). Learning better embeddings for rare words using distributional representations. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (pp. 280–285). Association for Computational Linguistics. https://doi.org/10.18653/v1/d15-1033
