Resolving references to objects in photographs using the words-as-classifiers model

Citations: 28
Mendeley readers: 114

Abstract

A common use of language is to refer to visually present objects. Modelling this capability in computers requires modelling the link between language and perception. The "words as classifiers" model of grounded semantics views words as classifiers of perceptual contexts and derives the meaning of a phrase by composing the denotations of its component words. It was recently shown to perform well in a game-playing scenario with a small number of object types. We apply it to two large sets of real-world photographs that contain a much larger variety of object types and for which referring expressions are available. Using a pre-trained convolutional neural network to extract image region features, and augmenting these with positional information, we show that the model achieves performance competitive with the state of the art in a reference resolution task (given an expression, find the bounding box of its referent), while, as we argue, being conceptually simpler and more flexible.
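
The core idea lends itself to a compact illustration. The sketch below (not the authors' code) treats each word's meaning as a logistic-regression-style classifier over region feature vectors, composes a phrase-level score by averaging the word-level probabilities, and picks the highest-scoring candidate region as the referent. The feature dimensionality, the random weights, and the averaging composition are illustrative assumptions; in the paper the classifiers are trained on referring expressions paired with image regions, and the features come from a pre-trained CNN plus positional information.

# Minimal sketch of the words-as-classifiers idea for reference resolution.
# Assumptions (not from the paper's code): each word is a logistic regressor
# over region features, weights are random placeholders instead of learned,
# and phrase scores are averages of word-level probabilities.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class WordClassifier:
    """A single word's 'meaning': a binary classifier over region features."""
    def __init__(self, dim, rng):
        # In the paper these weights would be learned from referring
        # expressions paired with image regions; here they are random.
        self.w = rng.normal(scale=0.1, size=dim)
        self.b = 0.0

    def prob(self, region_features):
        """P(word applies | region), given the region's feature vector."""
        return sigmoid(region_features @ self.w + self.b)

def resolve_reference(expression, lexicon, candidate_regions):
    """Return the index of the candidate region best matching the expression.

    candidate_regions: array of shape (n_regions, dim), e.g. CNN features
    of each bounding box concatenated with positional features.
    """
    words = [w for w in expression.lower().split() if w in lexicon]
    if not words:
        return None
    # Compose word-level scores into one phrase-level score per region
    # (here: simple averaging, one of several possible compositions).
    scores = np.mean(
        [[lexicon[w].prob(r) for r in candidate_regions] for w in words],
        axis=0,
    )
    return int(np.argmax(scores))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim = 1024 + 7  # e.g. CNN features plus a few positional features
    lexicon = {w: WordClassifier(dim, rng) for w in ["red", "mug", "left"]}
    regions = rng.normal(size=(5, dim))  # 5 candidate bounding boxes
    print(resolve_reference("the red mug on the left", lexicon, regions))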

Citation (APA)

Schlangen, D., Zarrieß, S., & Kennington, C. (2016). Resolving references to objects in photographs using the words-as-classifiers model. In 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016 - Long Papers (Vol. 2, pp. 1213–1223). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/p16-1115
