Improving Distributional Similarity with Lessons Learned from Word Embeddings

  • Levy, O.
  • Goldberg, Y.
  • Dagan, I.
Citations of this article: N/A
Readers: 1.6k (Mendeley users who have this article in their library)

Abstract

Recent trends suggest that neural-network-inspired word embedding models outperform traditional count-based distributional models on word similarity and analogy detection tasks. We reveal that much of the performance gains of word embeddings are due to certain system design choices and hyperparameter optimizations, rather than the embedding algorithms themselves. Furthermore, we show that these modifications can be transferred to traditional distributional models, yielding similar gains. In contrast to prior reports, we observe mostly local or insignificant performance differences between the methods, with no global advantage to any single approach over the others.
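To make the hyperparameter-transfer idea concrete, below is a minimal sketch of two of the modifications the paper discusses, applied to a count-based model: context-distribution smoothing (raising context counts to the power alpha = 0.75) and shifted PPMI (subtracting log k, the SGNS negative-sampling constant). The function name, toy count matrix, and defaults are illustrative assumptions, not the authors' released code.

# A minimal sketch (not the authors' implementation) of shifted, smoothed PPMI.
import numpy as np

def shifted_ppmi(counts, alpha=0.75, k=1.0):
    """Build a shifted, smoothed PPMI matrix from a word-context count matrix.

    counts : (V_words, V_contexts) array of co-occurrence counts
    alpha  : context-distribution smoothing exponent (0.75 in the paper)
    k      : SGNS-style shift; entries become max(PMI(w, c) - log k, 0)
    """
    counts = np.asarray(counts, dtype=float)
    total = counts.sum()
    p_wc = counts / total                     # joint probabilities
    p_w = counts.sum(axis=1) / total          # word marginals
    # Smooth the context distribution: raise counts to alpha, then renormalize.
    c_alpha = counts.sum(axis=0) ** alpha
    p_c = c_alpha / c_alpha.sum()
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(p_wc / (p_w[:, None] * p_c[None, :]))
    pmi[~np.isfinite(pmi)] = 0.0              # zero counts contribute 0, as in PPMI
    return np.maximum(pmi - np.log(k), 0.0)   # shift by log k and clip at zero

if __name__ == "__main__":
    # Tiny toy count matrix just to show the call; real matrices are large and sparse.
    toy = np.array([[10, 0, 3, 1],
                    [ 2, 8, 0, 5],
                    [ 0, 1, 7, 2]])
    print(shifted_ppmi(toy, alpha=0.75, k=5.0))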

Citation (APA)

Levy, O., Goldberg, Y., & Dagan, I. (2015). Improving Distributional Similarity with Lessons Learned from Word Embeddings. Transactions of the Association for Computational Linguistics, 3, 211–225. https://doi.org/10.1162/tacl_a_00134

Readers' Seniority

PhD / Post grad / Masters / Doc: 807 (72%)
Researcher: 225 (20%)
Professor / Associate Prof.: 54 (5%)
Lecturer / Post doc: 35 (3%)

Readers' Discipline

Computer Science: 944 (83%)
Engineering: 98 (9%)
Linguistics: 65 (6%)
Mathematics: 33 (3%)

Article Metrics

News Mentions: 3
References: 4
