Learning conceptual spaces with disentangled facets

Abstract

Conceptual spaces are geometric representations of meaning that were proposed by Gärdenfors (2000). They share many similarities with the vector space embeddings that are commonly used in natural language processing. However, rather than representing entities in a single vector space, conceptual spaces are usually decomposed into several facets, each of which is then modelled as a relatively low-dimensional vector space. Unfortunately, the problem of learning such conceptual spaces has thus far only received limited attention. To address this gap, we analyze how, and to what extent, a given vector space embedding can be decomposed into meaningful facets in an unsupervised fashion. While this problem is highly challenging, we show that useful facets can be discovered by relying on word embeddings to group semantically related features.
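The abstract's closing idea, using word embeddings to group semantically related features into facets, can be illustrated with a minimal sketch. The sketch assumes a simple spherical k-means clustering over hypothetical embedding vectors of feature labels; the paper's actual method, feature set, and embeddings are not reproduced here, and the toy 2-D vectors below are invented purely for illustration.

```python
import numpy as np

def group_features_into_facets(feature_vecs, n_facets):
    """Cluster feature words by cosine similarity of their embedding
    vectors; each resulting cluster of semantically related features
    is treated as one candidate facet (illustrative sketch only)."""
    names = list(feature_vecs)
    X = np.array([feature_vecs[n] for n in names], dtype=float)
    X /= np.linalg.norm(X, axis=1, keepdims=True)  # unit vectors: dot = cosine

    # Deterministic farthest-point initialisation: start from the first
    # feature, then repeatedly add the feature least similar to all
    # centers chosen so far.
    idx = [0]
    while len(idx) < n_facets:
        sims = X @ X[idx].T
        idx.append(int(sims.max(axis=1).argmin()))
    centers = X[idx].copy()

    # Standard spherical k-means iterations.
    for _ in range(20):
        labels = (X @ centers.T).argmax(axis=1)
        for k in range(n_facets):
            members = X[labels == k]
            if len(members):
                c = members.mean(axis=0)
                centers[k] = c / np.linalg.norm(c)

    return {k: sorted(n for n, l in zip(names, labels) if l == k)
            for k in range(n_facets)}

# Toy hand-made 2-D "embeddings" with two semantic groups of features.
toy = {"sweet": [1.0, 0.1], "sour": [0.9, 0.2],   # taste-like features
       "red":   [0.1, 1.0], "blue": [0.2, 0.9]}   # colour-like features
facets = group_features_into_facets(toy, n_facets=2)
```

On this toy input the two clusters recover the taste-like and colour-like groups, each of which would then be modelled as its own low-dimensional facet space.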

Citation (APA)

Alshaikh, R., Bouraoui, Z., & Schockaert, S. (2019). Learning conceptual spaces with disentangled facets. In CoNLL 2019 - 23rd Conference on Computational Natural Language Learning, Proceedings of the Conference (pp. 131–139). Association for Computational Linguistics. https://doi.org/10.18653/v1/K19-1013
