Second-order contexts from lexical substitutes for few-shot learning of word representations


Abstract

There is a growing awareness of the need to handle rare and unseen words in word representation modelling. In this paper, we focus on few-shot learning of emerging concepts that fully exploits the few available contexts. We introduce a substitute-based context representation technique that can be applied on top of an existing word embedding space. Previous context-based approaches to modelling unseen words consider only bag-of-words first-order contexts, whereas our method aggregates contexts as second-order substitutes produced by a sequence-aware sentence completion model. We experimented with three tasks designed to test the modelling of emerging concepts. We found that these tasks place different emphasis on first- and second-order contexts, and our substitute-based method achieved superior performance on naturally occurring contexts from corpora.
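The core idea in the abstract can be sketched in code: instead of averaging the embeddings of the words that surround an unseen word (first-order contexts), each context sentence is fed to a sentence completion model, and the probability-weighted substitutes it proposes for the target slot are averaged in the embedding space (second-order contexts). The toy embedding table and the stubbed substitute model below are illustrative assumptions, not the paper's actual components:

```python
# Hedged sketch of second-order substitute aggregation.
# EMBED is a toy 2-d stand-in for a pre-trained embedding space;
# substitutes() is a stub for a sequence-aware sentence completion
# model (a real system would query a language model here).

EMBED = {
    "cat": [1.0, 0.0],
    "dog": [0.9, 0.1],
    "pet": [0.8, 0.2],
    "sat": [0.0, 1.0],
}

def substitutes(context, blank_idx, k=2):
    """Return (word, probability) pairs that could fill the blank.
    Stub: a real implementation would rank candidates with an LM."""
    return [("cat", 0.6), ("dog", 0.4)]

def embed_unseen(contexts, k=2):
    """Represent an unseen word as the probability-weighted average
    of its substitutes' embeddings across all available contexts."""
    dim = len(next(iter(EMBED.values())))
    total = [0.0] * dim
    weight = 0.0
    for sent, idx in contexts:
        for word, p in substitutes(sent, idx, k):
            if word in EMBED:
                for j in range(dim):
                    total[j] += p * EMBED[word][j]
                weight += p
    return [t / weight for t in total] if weight else total

# One few-shot context for an unseen word in the blank position.
contexts = [(["the", "___", "sat", "down"], 1)]
vec = embed_unseen(contexts)  # lands near "cat"/"dog" in the space
```

The key contrast with first-order approaches is that the averaged vectors come from the completion model's substitute distribution, not from the observed context words themselves, so the representation inherits the model's sequence awareness.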

APA

Liu, Q., McCarthy, D., & Korhonen, A. (2019). Second-order contexts from lexical substitutes for few-shot learning of word representations. In *SEM@NAACL-HLT 2019 - 8th Joint Conference on Lexical and Computational Semantics (pp. 61–67). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/s19-1007
