SubGram: Extending skip-gram word representation with substrings


Abstract

Skip-gram (word2vec) is a recent method for creating vector representations of words ("distributed word representations") using a neural network. The representation gained popularity in various areas of natural language processing, because it seems to capture syntactic and semantic information about words without any explicit supervision. We propose SubGram, a refinement of the Skip-gram model that also considers word structure during training, achieving large gains on the original Skip-gram test set.
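To illustrate the idea of incorporating word structure, the sketch below enumerates character substrings of a word, with boundary markers so that prefixes and suffixes remain distinguishable. This is a minimal, hypothetical illustration of substring features; the exact feature set, lengths, and marker conventions used by SubGram may differ and are given in the paper.

```python
def substrings(word, min_len=2, max_len=4):
    """Return the set of character substrings of `word` between
    min_len and max_len characters, including boundary markers.

    Note: min_len, max_len, and the '^'/'$' markers are illustrative
    choices, not necessarily those of the SubGram model.
    """
    marked = "^" + word + "$"  # mark word boundaries
    subs = set()
    for n in range(min_len, max_len + 1):
        for i in range(len(marked) - n + 1):
            subs.add(marked[i:i + n])
    return subs


# Example: the word "cat" yields substrings such as "^c", "ca", "at$".
print(sorted(substrings("cat")))
```

In models of this family, the word's vector is typically composed from the vectors of such substrings, so that morphologically related words share parameters.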

Citation (APA)

Kocmi, T., & Bojar, O. (2016). SubGram: Extending skip-gram word representation with substrings. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9924 LNCS, pp. 182–189). Springer Verlag. https://doi.org/10.1007/978-3-319-45510-5_21
