Distributional semantics models are known to struggle with small data. It is generally accepted that in order to learn ‘a good vector’ for a word, a model must have sufficient examples of its usage. This contradicts the fact that humans can guess the meaning of a word from only a few occurrences. In this paper, we show that a neural language model such as Word2Vec requires only minor modifications to its standard architecture to learn new terms from tiny data, using background knowledge from a previously learnt semantic space. We test our model on word definitions and on a nonce task involving 2-6 sentences’ worth of context, showing a large increase in performance over state-of-the-art models on the definitional task.
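To make the general strategy described in the abstract concrete, here is a minimal Python/NumPy sketch of one way to learn a vector for a single unseen ("nonce") word from a handful of sentences while keeping a pretrained background space fixed. The function name `learn_nonce_vector`, the initialisation from summed context vectors, and the specific hyperparameters (learning rate, decay, number of negatives) are illustrative assumptions, not the authors' published implementation.

```python
import numpy as np

def learn_nonce_vector(contexts, background, lr=1.0, decay=0.7, negatives=5, seed=0):
    """Learn a vector for one unseen word from a few sentences, keeping a
    pretrained background space frozen: skip-gram-style negative sampling
    in which only the new word's vector is updated."""
    rng = np.random.default_rng(seed)
    vocab = list(background)
    # Start from the summed (normalised) context vectors so the new word
    # begins in a plausible region of the background space.
    nonce = np.sum([background[w] for s in contexts for w in s if w in background], axis=0)
    nonce = nonce / (np.linalg.norm(nonce) + 1e-9)

    for sentence in contexts:
        for w in sentence:
            if w not in background:
                continue
            # One positive pair (nonce, observed context word) plus sampled negatives.
            pairs = [(background[w], 1.0)]
            pairs += [(background[rng.choice(vocab)], 0.0) for _ in range(negatives)]
            grad = np.zeros_like(nonce)
            for vec, label in pairs:
                pred = 1.0 / (1.0 + np.exp(-nonce @ vec))
                grad += (pred - label) * vec   # background vectors stay frozen
            nonce -= lr * grad
        lr *= decay                            # aggressive initial rate, decayed per sentence
    return nonce

# Toy usage: a 50-dimensional pretrained space and two context sentences.
rng = np.random.default_rng(1)
background = {w: rng.standard_normal(50) for w in ["drinks", "blood", "flies", "night", "animal"]}
contexts = [["drinks", "blood", "night"], ["flies", "animal", "night"]]
vector = learn_nonce_vector(contexts, background)
print(vector.shape)  # (50,)
```

Freezing the background space is the key design choice in this sketch: with only 2-6 sentences of evidence, updating the pretrained vectors as well would risk distorting the previously learnt semantic space that the new word is supposed to be anchored in.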
CITATION STYLE
Herbelot, A., & Baroni, M. (2017). High-risk learning: Acquiring new word vectors from tiny data. In EMNLP 2017 - Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 304–309). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/d17-1030