Learning grounded word meaning representations on similarity graphs

Abstract

This paper introduces a novel approach to learn visually grounded meaning representations of words as low-dimensional node embeddings on an underlying graph hierarchy. The lower level of the hierarchy models modality-specific word representations through dedicated but communicating graphs, while the higher level puts these representations together on a single graph to learn a representation jointly from both modalities. The topology of each graph models similarity relations among words, and is estimated jointly with the graph embedding. The assumption underlying this model is that words sharing similar meaning correspond to communities in an underlying similarity graph in a low-dimensional space. We named this model Hierarchical Multi-Modal Similarity Graph Embedding (HM-SGE). Experimental results validate the ability of HM-SGE to simulate human similarity judgements and concept categorization, outperforming the state of the art.
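To make the hierarchical construction concrete: per-modality similarity graphs are built over the same vocabulary and embedded in a low-dimensional space, and a joint graph is then formed on top of the resulting representations. The sketch below is only a loose, hypothetical approximation of that idea using k-NN graphs and off-the-shelf spectral embedding; it is not the authors' HM-SGE model, in which the graph topologies and the node embeddings are estimated jointly. All variable names and feature dimensions are illustrative assumptions.

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding
from sklearn.neighbors import kneighbors_graph


def knn_similarity_graph(features, k=10):
    """Build a symmetric k-nearest-neighbour similarity graph over words."""
    adj = kneighbors_graph(features, n_neighbors=k, mode="connectivity",
                           include_self=False)
    return 0.5 * (adj + adj.T)  # symmetrize the adjacency matrix


def embed_graph(adjacency, dim=32):
    """Embed graph nodes in a low-dimensional space (spectral embedding is a
    stand-in for the jointly learned node embedding described in the paper)."""
    return SpectralEmbedding(n_components=dim,
                             affinity="precomputed").fit_transform(adjacency.toarray())


# Hypothetical inputs: textual and visual features for the same vocabulary.
rng = np.random.default_rng(0)
n_words = 200
textual = rng.normal(size=(n_words, 300))  # e.g. distributional word vectors
visual = rng.normal(size=(n_words, 512))   # e.g. image features for the same words

# Lower level: one similarity graph (and embedding) per modality.
z_text = embed_graph(knn_similarity_graph(textual))
z_vis = embed_graph(knn_similarity_graph(visual))

# Higher level: a single graph built over the combined modality-specific
# embeddings, yielding one grounded representation per word.
z_joint = embed_graph(knn_similarity_graph(np.hstack([z_text, z_vis])))
print(z_joint.shape)  # (n_words, 32)
```

In this toy version the two levels are computed sequentially and the joint graph is obtained by simple concatenation of the lower-level embeddings, whereas HM-SGE lets the modality-specific graphs communicate and learns the topology together with the embedding.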

Citation (APA)
Dimiccoli, M., Wendt, H., & Batlle, P. (2021). Learning grounded word meaning representations on similarity graphs. In EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 4760–4769). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.emnlp-main.391
