Revisiting Skip-Gram Negative Sampling Model with Rectification


Abstract

We revisit skip-gram negative sampling (SGNS), one of the most popular neural-network-based approaches to learning distributed word representations. We first point out an ambiguity issue that undermines the SGNS model: the word vectors can be distorted arbitrarily without changing the objective value. To resolve this issue, we investigate the intrinsic structure that a good word embedding model should deliver in its solution. Motivated by this, we rectify the SGNS model with quadratic regularization and show that this simple modification suffices to structure the solution in the desired manner. A theoretical justification is presented, which offers novel insights into quadratic regularization. Preliminary experiments on Google's analogical reasoning task are also conducted to support the modified SGNS model.
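To make the ambiguity concrete, here is a minimal sketch of the issue in standard SGNS notation; the matrix form of the objective and the exact coefficients of the regularizer are assumptions for illustration, not reproductions of the paper's formulas. Writing $W$ for the matrix of word vectors and $C$ for the matrix of context vectors, the SGNS objective depends on $(W, C)$ only through the inner products collected in $W C^\top$:

$$\mathcal{L}(W, C) \;=\; \sum_{i,j} \#(w_i, c_j)\,\log\sigma\big(\langle w_i, c_j\rangle\big) \;+\; k \sum_i \#(w_i)\,\mathbb{E}_{c_N \sim P_n}\big[\log\sigma\big(-\langle w_i, c_N\rangle\big)\big],$$

where $\sigma$ is the logistic sigmoid, $\#(\cdot)$ counts corpus occurrences, and $k$ is the number of negative samples. For any invertible matrix $A$, the pair $(W A,\; C A^{-\top})$ yields exactly the same product $W C^\top$ and hence the same objective value, even though $A$ can distort the individual word vectors arbitrarily. Adding a quadratic penalty,

$$\mathcal{L}_\lambda(W, C) \;=\; \mathcal{L}(W, C) \;-\; \frac{\lambda}{2}\big(\|W\|_F^2 + \|C\|_F^2\big),$$

breaks this invariance: among all factorizations sharing the same product $W C^\top = M$, the penalty $\tfrac{1}{2}\big(\|W\|_F^2 + \|C\|_F^2\big)$ is minimized at a balanced factorization (its minimum equals the nuclear norm $\|M\|_*$), which pins down the solution up to an orthogonal transformation, and orthogonal transformations preserve all inner products between word vectors.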

Citation (APA)

Mu, C., Yang, G., & Zheng, Y. (2019). Revisiting Skip-Gram Negative Sampling Model with Rectification. In Advances in Intelligent Systems and Computing (Vol. 997, pp. 485–497). Springer. https://doi.org/10.1007/978-3-030-22871-2_33
