Approximating word ranking and negative sampling for word embedding

Abstract

CBOW (Continuous Bag-Of-Words) is one of the most commonly used techniques to generate word embeddings for various NLP tasks. However, it falls short of optimal performance due to its uniform treatment of positive words and its simplistic sampling distribution over negative words. To resolve these issues, we propose OptRank, which optimizes word ranking and approximates negative sampling to improve word embeddings. Specifically, we first formalize word embedding as a ranking problem. We then weigh positive words by their ranks, so that highly ranked words carry more importance, and adopt a dynamic sampling strategy to select informative negative words. In addition, we design an approximation method to compute word ranks efficiently. Experiments show that OptRank consistently outperforms its counterparts on a benchmark dataset across different sampling scales, especially when the sampled subset is small. The code and datasets can be obtained from https://github.com/ouououououou/OptRank.
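
As a rough illustration of the rank-approximation idea sketched above (a minimal sketch, not the authors' implementation: the function names, the sample size, and the log-based weighting are all assumptions), one can estimate a positive word's rank from a small random sample of the vocabulary and convert that estimate into an update weight:

import numpy as np

def approximate_rank(target_score, scores, num_samples, rng):
    # Estimate the rank of a target word within the full vocabulary by
    # scoring only a random subset of words and scaling the number of
    # higher-scoring words back up to the vocabulary size.
    vocab_size = len(scores)
    sample = rng.choice(vocab_size, size=num_samples, replace=False)
    violations = np.sum(scores[sample] >= target_score)
    return 1 + int(violations * vocab_size / num_samples)

def rank_weight(rank):
    # Turn an estimated rank into an importance weight. Here, larger rank
    # estimates (more violating words) yield larger weights, a common
    # WARP-style convention; the paper's exact weighting may differ.
    return np.log1p(rank)

rng = np.random.default_rng(0)
scores = rng.normal(size=10_000)  # stand-in for model scores over the vocabulary
rank = approximate_rank(scores[42], scores, num_samples=200, rng=rng)
weight = rank_weight(rank)

A dynamic negative sampler in the same spirit would then prefer the violating words encountered during such an estimate over uniformly drawn ones, which matches the abstract's goal of selecting informative negatives.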

Citation (APA)

Guo, G., Ouyang, S., Yuan, F., & Wang, X. (2018). Approximating word ranking and negative sampling for word embedding. In Proceedings of the 27th International Joint Conference on Artificial Intelligence (IJCAI 2018) (pp. 4092–4098). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2018/569
