Mapping distributional semantics to property norms with deep neural networks

Abstract

Word embeddings have been very successful in many natural language processing tasks, but they characterize the meaning of a word or concept by an uninterpretable “context signature”. Such a representation can make results obtained with embeddings difficult to interpret: neighboring word vectors may have similar meanings, but in what way are they similar? That similarity may reflect synonymy, metonymy, or even antonymy. In the cognitive psychology literature, by contrast, concepts are frequently represented by their relations to properties, which test subjects produce when asked to describe the important features of a concept. As such, these properties form a natural, intuitive feature space. In this work, we present a neural-network-based method for automatically mapping a distributional semantic space onto a human-built property space. We evaluate our method on word embeddings learned with different types of contexts and report state-of-the-art performance on the widely used McRae semantic feature production norms.
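The abstract does not specify the network's architecture, but the core idea lends itself to a short sketch: train a network that takes a word's distributional embedding as input and predicts a score for each property in the norm set. The sketch below is a minimal, hypothetical PyTorch version, assuming a 300-dimensional embedding, one hidden layer, and binary cross-entropy over per-property outputs; none of these choices are confirmed by the paper itself.

```python
import torch
import torch.nn as nn

# Illustrative assumptions, not the paper's actual settings:
EMBEDDING_DIM = 300     # e.g., word2vec/GloVe dimensionality
NUM_PROPERTIES = 2526   # number of distinct McRae features (illustrative)

# Feedforward mapping from embedding space to property-norm space.
# Each output unit is the predicted strength of one property
# (e.g., "is_an_animal", "has_legs").
model = nn.Sequential(
    nn.Linear(EMBEDDING_DIM, 512),
    nn.ReLU(),
    nn.Linear(512, NUM_PROPERTIES),
    nn.Sigmoid(),
)

# Multi-label objective: a concept can have many properties at once,
# so binary cross-entropy over all property outputs is a natural choice.
loss_fn = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(embeddings, property_targets):
    """One gradient step on a batch of (embedding, property-vector) pairs.

    embeddings:       FloatTensor of shape (batch, EMBEDDING_DIM)
    property_targets: FloatTensor of shape (batch, NUM_PROPERTIES),
                      e.g., normalized production frequencies in [0, 1].
    """
    optimizer.zero_grad()
    predictions = model(embeddings)
    loss = loss_fn(predictions, property_targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```

At evaluation time, the predicted property vector for an unseen word can be compared against its gold norm vector (e.g., by ranking properties or thresholding scores), which is the general spirit of evaluating against the McRae norms.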

Citation (APA)

Li, D., & Summers-Stay, D. (2019). Mapping distributional semantics to property norms with deep neural networks. Big Data and Cognitive Computing, 3(2), 1–11. https://doi.org/10.3390/bdcc3020030
