Learning multi-modal word representation grounded in visual context

Abstract

Representing the semantics of words is a long-standing problem for the natural language processing community. Most methods compute word semantics from the textual context of words in large corpora. More recently, researchers have attempted to integrate perceptual and visual features. Most of these works consider the visual appearance of objects to enhance word representations, but they ignore the visual environment and context in which objects appear. We propose to unify text-based techniques with vision-based techniques by simultaneously leveraging textual and visual context to learn multimodal word embeddings. We explore various choices for what can serve as a visual context and present an end-to-end method to integrate visual context elements into a multimodal skip-gram model. We report experiments and an extensive analysis of the results.
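The abstract describes a skip-gram objective extended with a visual-context term. The sketch below is a minimal, hypothetical PyTorch rendering of that idea, assuming a standard skip-gram-with-negative-sampling loss plus a dot-product term that scores a target word against a projected visual-context feature vector; the class name MultimodalSkipGram, the projection visual_proj, and the weighting alpha are illustrative assumptions, not the paper's actual formulation, which may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultimodalSkipGram(nn.Module):
    """Hypothetical sketch: skip-gram with negative sampling (SGNS)
    plus a visual-context term. The visual context is a feature vector
    describing the surroundings of the object a word denotes (e.g.
    pooled CNN features of the scene), projected into word space."""

    def __init__(self, vocab_size, dim, visual_dim):
        super().__init__()
        self.in_embed = nn.Embedding(vocab_size, dim)   # target-word vectors
        self.out_embed = nn.Embedding(vocab_size, dim)  # context-word vectors
        self.visual_proj = nn.Linear(visual_dim, dim)   # visual features -> word space

    def text_loss(self, target, context, negatives):
        # Standard SGNS: pull true (target, context) pairs together,
        # push sampled negative context words away.
        v = self.in_embed(target)                        # (B, d)
        pos = (v * self.out_embed(context)).sum(-1)      # (B,)
        neg = torch.bmm(self.out_embed(negatives),       # (B, k, d)
                        v.unsqueeze(-1)).squeeze(-1)     # (B, k)
        return -(F.logsigmoid(pos) + F.logsigmoid(-neg).sum(-1)).mean()

    def visual_loss(self, target, visual_context):
        # Score the target word against its projected visual context.
        v = self.in_embed(target)                        # (B, d)
        u = self.visual_proj(visual_context)             # (B, d)
        return -F.logsigmoid((v * u).sum(-1)).mean()

    def forward(self, target, context, negatives, visual_context, alpha=0.5):
        # Joint end-to-end objective: textual term + weighted visual term.
        # alpha is an assumed interpolation hyperparameter.
        return self.text_loss(target, context, negatives) + \
               alpha * self.visual_loss(target, visual_context)

if __name__ == "__main__":
    # Smoke test on random data (2048 mimics pooled CNN feature size).
    model = MultimodalSkipGram(vocab_size=10_000, dim=100, visual_dim=2048)
    B, k = 32, 5
    loss = model(
        target=torch.randint(0, 10_000, (B,)),
        context=torch.randint(0, 10_000, (B,)),
        negatives=torch.randint(0, 10_000, (B, k)),
        visual_context=torch.randn(B, 2048),
    )
    loss.backward()
    print(float(loss))
```

Because both terms share the target-word embedding table, gradients from the visual term ground the same vectors that the textual term trains, which is one simple way to realize the "simultaneously leveraging textual and visual context" idea sketched in the abstract.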

Citation (APA)
Zablocki, É., Piwowarski, B., Soulier, L., & Gallinari, P. (2018). Learning multi-modal word representation grounded in visual context. In 32nd AAAI Conference on Artificial Intelligence, AAAI 2018 (pp. 5626–5633). AAAI Press. https://doi.org/10.1609/aaai.v32i1.11939
