There are two main types of word representations: low-dimensional embeddings and high-dimensional distributional vectors, in which each dimension corresponds to a context word. In this paper, we initialize an embedding-learning model with distributional vectors. Evaluation on word similarity shows that this initialization significantly increases the quality of embeddings for rare words.
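The abstract's idea — start the embedding learner from high-dimensional count vectors instead of random values — can be sketched roughly as follows. This is a minimal illustration, not the paper's actual method: it builds co-occurrence count vectors (one dimension per context word) from a toy corpus and projects them to a low-dimensional initialization with truncated SVD; the function names and the SVD projection are assumptions chosen for simplicity.

```python
import numpy as np

def distributional_vectors(corpus, window=2):
    """Build high-dimensional distributional vectors: one dimension
    per context word, filled with co-occurrence counts."""
    vocab = sorted({w for sent in corpus for w in sent})
    idx = {w: i for i, w in enumerate(vocab)}
    counts = np.zeros((len(vocab), len(vocab)))
    for sent in corpus:
        for i, w in enumerate(sent):
            lo, hi = max(0, i - window), min(len(sent), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    counts[idx[w], idx[sent[j]]] += 1.0
    return vocab, counts

def init_embeddings(counts, dim=2):
    """Project the sparse count vectors down to a dense low-dimensional
    matrix via truncated SVD; this matrix would then seed an
    embedding-learning model instead of a random initialization."""
    u, s, _ = np.linalg.svd(counts, full_matrices=False)
    return u[:, :dim] * s[:dim]

# Toy usage: two short sentences, 2-dimensional initialization.
corpus = [["the", "cat", "sat"], ["the", "dog", "sat"]]
vocab, counts = distributional_vectors(corpus)
emb = init_embeddings(counts, dim=2)
```

For rare words, the count vectors are sparse but still informative, which is the intuition behind using them as an initialization rather than as the final representation.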
Sergienya, I., & Schütze, H. (2015). Learning better embeddings for rare words using distributional representations. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP) (pp. 280–285). Association for Computational Linguistics. https://doi.org/10.18653/v1/d15-1033