Integrating distributional lexical contrast into word embeddings for antonym-synonym distinction

Abstract

We propose a novel vector representation that integrates lexical contrast into distributional vectors and strengthens the most salient features for determining degrees of word similarity. The improved vectors significantly outperform standard models and distinguish antonyms from synonyms with an average precision of 0.66-0.76 across word classes (adjectives, nouns, verbs). Moreover, we integrate the lexical contrast vectors into the objective function of a skip-gram model. The novel embedding outperforms state-of-the-art models on predicting word similarities in SimLex-999, and on distinguishing antonyms from synonyms.
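
The abstract describes adding a lexical-contrast term to the skip-gram objective. As a minimal illustrative sketch of that general idea (not the authors' exact formulation; the toy vocabulary, the log-sigmoid contrast term, and the weighting factor lam are assumptions for illustration), the snippet below augments a negative-sampling skip-gram loss with a term that rewards similarity to synonyms of the target word and penalizes similarity to its antonyms.

# Illustrative sketch only: skip-gram negative-sampling loss plus a
# lexical-contrast term (pull synonyms toward the target, push antonyms away).
# The vocabulary, dimensions, and weighting factor lam are assumptions,
# not the paper's settings.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["good", "great", "bad", "terrible", "movie", "plot"]
idx = {w: i for i, w in enumerate(vocab)}
embed_dim = 8
W_in = rng.normal(scale=0.1, size=(len(vocab), embed_dim))   # target vectors
W_out = rng.normal(scale=0.1, size=(len(vocab), embed_dim))  # context vectors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def skipgram_contrast_loss(target, context, negatives, synonyms, antonyms, lam=1.0):
    """Negative-sampling skip-gram loss with an added lexical-contrast term."""
    t = W_in[idx[target]]
    # Standard skip-gram with negative sampling.
    loss = -np.log(sigmoid(W_out[idx[context]] @ t))
    for neg in negatives:
        loss -= np.log(sigmoid(-W_out[idx[neg]] @ t))
    # Lexical contrast: high similarity to synonyms, low similarity to antonyms.
    contrast = 0.0
    for syn in synonyms:
        contrast -= np.log(sigmoid(W_in[idx[syn]] @ t))
    for ant in antonyms:
        contrast -= np.log(sigmoid(-W_in[idx[ant]] @ t))
    return loss + lam * contrast

# Example call on the toy vocabulary.
print(skipgram_contrast_loss("good", "movie", negatives=["plot"],
                             synonyms=["great"], antonyms=["bad", "terrible"]))

In a full training loop this loss would be minimized over a corpus with gradient updates to W_in and W_out, so that antonym pairs end up less similar than synonym pairs in the learned space.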

Citation (APA)
Nguyen, K. A., Schulte im Walde, S., & Vu, N. T. (2016). Integrating distributional lexical contrast into word embeddings for antonym-synonym distinction. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) (pp. 454–459). Association for Computational Linguistics. https://doi.org/10.18653/v1/p16-2074
