Grounding co-occurrence: Identifying features in a lexical co-occurrence model of semantic memory

Abstract

Lexical co-occurrence models of semantic memory represent word meaning by vectors in a high-dimensional space. These vectors are derived from word usage, as found in a large corpus of written text. Typically, these models are fully automated, an advantage over models of semantics that are based on human judgments (e.g., feature-based models). A common criticism of co-occurrence models is that their representations are not grounded: Concepts exist only relative to each other in the space produced by the model. It has been claimed that feature-based models offer an advantage in this regard. In this article, we take a step toward grounding a co-occurrence model. A feed-forward neural network is trained using backpropagation to provide a mapping from co-occurrence vectors to feature norms collected from subjects. We show that this network is able to retrieve the features of a concept from its co-occurrence vector with high accuracy, and that it can generalize this ability to produce an appropriate list of features from the co-occurrence vector of a novel concept. © 2009 The Psychonomic Society, Inc.
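
The mapping the abstract describes is easy to picture in code. Below is a minimal sketch in NumPy of a feed-forward network trained with backpropagation to map co-occurrence vectors to binary feature-norm vectors. The abstract specifies only the general approach, so the single hidden layer, sigmoid activations, cross-entropy loss, layer sizes, and synthetic data here are illustrative assumptions, not the authors' actual architecture or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed, made-up dimensions: 200 concepts, 50-dimensional co-occurrence
# vectors, 64 hidden units, 30 possible features in the norms.
n_words, cooc_dim, hidden_dim, n_features = 200, 50, 64, 30

# Synthetic stand-ins: co-occurrence vectors (inputs) and binary
# feature-norm vectors (targets; 1 = feature listed for the concept).
X = rng.normal(size=(n_words, cooc_dim))
Y = (rng.random((n_words, n_features)) < 0.2).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weights for the input->hidden and hidden->output layers (no biases,
# to keep the sketch short).
W1 = rng.normal(scale=0.1, size=(cooc_dim, hidden_dim))
W2 = rng.normal(scale=0.1, size=(hidden_dim, n_features))
lr = 0.5

for epoch in range(500):
    # Forward pass.
    H = sigmoid(X @ W1)   # hidden activations
    P = sigmoid(H @ W2)   # predicted feature probabilities

    # Backward pass: gradient of mean cross-entropy w.r.t. pre-activations.
    dP = (P - Y) / n_words
    dW2 = H.T @ dP
    dH = (dP @ W2.T) * H * (1.0 - H)
    dW1 = X.T @ dH

    W1 -= lr * dW1
    W2 -= lr * dW2

# Retrieving a concept's features: threshold the output probabilities.
predicted_features = P[0] > 0.5
```

Generalization to a novel concept, as reported in the article, would amount to feeding a held-out co-occurrence vector through the trained network and reading off the features whose output units exceed the threshold.
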

Citation (APA)

Buchanan, L., Durda, K., & Caron, R. (2009). Grounding co-occurrence: Identifying features in a lexical co-occurrence model of semantic memory. Behavior Research Methods, 41(4), 1210–1223. https://doi.org/10.3758/BRM.41.4.1210
