Learning grounded meaning representations with autoencoders

Abstract

In this paper we address the problem of grounding distributional representations of lexical meaning. We introduce a new model which uses stacked autoencoders to learn higher-level embeddings from textual and visual input. The two modalities are encoded as vectors of attributes and are obtained automatically from text and images, respectively. We evaluate our model on its ability to simulate similarity judgments and concept categorization. On both tasks, our approach outperforms baselines and related models.
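The paper's own implementation is not reproduced here. As a rough, hedged illustration of the architecture the abstract describes — per-modality autoencoders over textual and visual attribute vectors, whose hidden codes are concatenated and fed to a second autoencoder that yields the multimodal embedding — the following sketch may help; all dimensions, activations, and training details are assumptions, not the authors' choices.

```python
import numpy as np

# Illustrative sketch only (not the authors' code): a tiny bimodal
# stacked autoencoder. Each modality gets its own tied-weight
# autoencoder; the concatenated hidden codes are compressed by a
# second autoencoder whose hidden layer is the "grounded" embedding.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class Autoencoder:
    """One tied-weight sigmoid autoencoder trained by batch SGD."""
    def __init__(self, n_in, n_hidden):
        self.W = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.b = np.zeros(n_hidden)   # encoder bias
        self.c = np.zeros(n_in)       # decoder bias

    def encode(self, x):
        return sigmoid(x @ self.W + self.b)

    def decode(self, h):
        return sigmoid(h @ self.W.T + self.c)

    def train(self, X, epochs=200, lr=0.5):
        n = len(X)
        for _ in range(epochs):
            h = self.encode(X)
            r = self.decode(h)
            # backprop of squared reconstruction error through both
            # sigmoid layers; W receives gradient from encode AND decode
            # paths because the weights are tied
            d_r = (r - X) * r * (1 - r)
            d_h = (d_r @ self.W) * h * (1 - h)
            self.W -= lr * (d_r.T @ h + X.T @ d_h) / n
            self.b -= lr * d_h.mean(axis=0)
            self.c -= lr * d_r.mean(axis=0)

# Toy attribute vectors (rows = concepts); real inputs would be
# automatically extracted textual and visual attribute vectors.
X_text = rng.random((20, 10))
X_img = rng.random((20, 8))

ae_text = Autoencoder(10, 6)
ae_text.train(X_text)
ae_img = Autoencoder(8, 6)
ae_img.train(X_img)

# Second layer fuses the two unimodal codes into one embedding.
fused = np.hstack([ae_text.encode(X_text), ae_img.encode(X_img)])
ae_top = Autoencoder(12, 5)
ae_top.train(fused)
embedding = ae_top.encode(fused)  # (20, 5) multimodal representations
```

Such embeddings could then be compared with cosine similarity for the similarity-judgment and categorization evaluations the abstract mentions; the greedy layer-wise training shown here is a common way to train stacked autoencoders, though the paper's exact procedure may differ.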

Citation (APA)

Silberer, C., & Lapata, M. (2014). Learning grounded meaning representations with autoencoders. In 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014 - Proceedings of the Conference (Vol. 1, pp. 721–732). Association for Computational Linguistics (ACL). https://doi.org/10.3115/v1/p14-1068
