Learning compositionality functions on word embeddings for modelling attribute meaning in adjective-noun phrases

23 citations · 98 Mendeley readers

Abstract

Word embeddings have been shown to be highly effective in a variety of lexical semantic tasks. They tend to capture meaningful relational similarities between individual words, but lack the capability of making the underlying semantic relation explicit. In this paper, we investigate the attribute relation that often holds between the constituents of adjective-noun phrases. We use CBOW word embeddings to represent word meaning and learn a compositionality function that combines the individual constituents into a phrase representation, thus capturing the compositional attribute meaning. The resulting embedding model, while being fully interpretable, outperforms count-based distributional vector space models tailored to attribute meaning on the two tasks of attribute selection and phrase similarity prediction. Moreover, as the model captures a generalized layer of attribute meaning, it has the potential to be used for predictions over various attribute inventories without re-training.
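The abstract's core idea can be sketched in a few lines: a compositionality function maps the pair of constituent embeddings to a single phrase vector in an attribute space, and attribute selection then picks the highest-scoring attribute dimension. The sketch below uses synthetic random vectors and a simple linear least-squares composition; the dimensions, data, and the choice of a linear map are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 50-dim CBOW-style embeddings for adjectives and nouns,
# and a 10-dim attribute inventory (e.g. COLOR, SIZE, SPEED, ...).
# All vectors here are synthetic stand-ins for corpus-derived embeddings.
dim, n_attrs, n_pairs = 50, 10, 200

adj = rng.normal(size=(n_pairs, dim))         # adjective embeddings
noun = rng.normal(size=(n_pairs, dim))        # noun embeddings
W_true = rng.normal(size=(2 * dim, n_attrs))  # toy ground-truth mapping
X = np.hstack([adj, noun])                    # concatenated [adj; noun] input
Y = X @ W_true                                # target attribute representations

# Learn a linear compositionality function f([a; n]) = [a; n] W by least
# squares -- one simple parameterization of such a function.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def compose(a, n, W=W):
    """Map an adjective-noun pair to its attribute-space representation."""
    return np.concatenate([a, n]) @ W

# Attribute selection: choose the attribute dimension with the largest score.
phrase_vec = compose(adj[0], noun[0])
predicted_attr = int(np.argmax(phrase_vec))
```

Because each attribute corresponds to an explicit dimension of the composed vector, the model stays interpretable: inspecting `phrase_vec` directly shows which attributes a phrase activates.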

Citation (APA)

Hartung, M., Kaupmann, F., Jebbara, S., & Cimiano, P. (2017). Learning compositionality functions on word embeddings for modelling attribute meaning in adjective-noun phrases. In 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017 - Proceedings of Conference (Vol. 1, pp. 54–64). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/e17-1006
