This paper explores an incremental training strategy for the skip-gram model with negative sampling (SGNS) from both empirical and theoretical perspectives. Existing neural word embedding methods, including SGNS, are multi-pass algorithms and thus cannot perform incremental model updates. To address this problem, we present a simple incremental extension of SGNS and provide a thorough theoretical analysis to demonstrate its validity. Empirical experiments demonstrate the correctness of the theoretical analysis as well as the practical usefulness of the incremental algorithm.
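To make the contrast with multi-pass training concrete, below is a minimal sketch of what a single-pass (incremental) SGNS learner can look like: the vocabulary, unigram noise distribution, and embedding vectors are all grown and updated as data streams in, with no second pass over earlier text. All names and hyperparameters here are illustrative assumptions; the paper's actual algorithm (e.g., its learning-rate schedule and vocabulary handling) may differ.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class IncrementalSGNS:
    """Illustrative single-pass SGNS; not the authors' exact implementation."""

    def __init__(self, dim=50, window=2, negatives=5, lr=0.025, power=0.75, seed=0):
        self.dim, self.window, self.negatives = dim, window, negatives
        self.lr, self.power = lr, power
        self.rng = np.random.default_rng(seed)
        self.vocab = {}      # word -> index, built on the fly
        self.counts = []     # incremental unigram counts for the noise distribution
        self.W_in = []       # target (word) vectors
        self.W_out = []      # context vectors

    def _index(self, word):
        # Register unseen words during the single pass over the stream.
        if word not in self.vocab:
            self.vocab[word] = len(self.vocab)
            self.counts.append(0)
            self.W_in.append((self.rng.random(self.dim) - 0.5) / self.dim)
            self.W_out.append(np.zeros(self.dim))
        i = self.vocab[word]
        self.counts[i] += 1
        return i

    def _sample_negatives(self):
        # Smoothed unigram noise distribution, recomputed from counts seen so far.
        p = np.array(self.counts, dtype=float) ** self.power
        p /= p.sum()
        return self.rng.choice(len(p), size=self.negatives, p=p)

    def _update(self, target, context):
        # One SGD step on the SGNS objective for a (target, context) pair plus negatives.
        grad_in = np.zeros(self.dim)
        for ctx, label in [(context, 1.0)] + [(n, 0.0) for n in self._sample_negatives()]:
            score = sigmoid(self.W_in[target] @ self.W_out[ctx])
            g = self.lr * (label - score)
            grad_in += g * self.W_out[ctx]
            self.W_out[ctx] += g * self.W_in[target]
        self.W_in[target] += grad_in

    def train_sentence(self, tokens):
        ids = [self._index(w) for w in tokens]
        for pos, target in enumerate(ids):
            for off in range(-self.window, self.window + 1):
                if off != 0 and 0 <= pos + off < len(ids):
                    self._update(target, ids[pos + off])

# Usage: feed sentences as they arrive; no revisiting of earlier data is required.
model = IncrementalSGNS()
for sentence in [["the", "cat", "sat"], ["the", "dog", "ran"]]:
    model.train_sentence(sentence)
```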
CITATION
Kaji, N., & Kobayashi, H. (2017). Incremental skip-gram model with negative sampling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017), pages 363–371. Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/d17-1037