OBJ2TEXT: Generating visually descriptive language from object layouts

35 citations · 140 Mendeley readers

Abstract

Generating captions for images is a task that has recently received considerable attention. In this work we focus on caption generation for abstract scenes, or object layouts where the only information provided is a set of objects and their locations. We propose OBJ2TEXT, a sequence-to-sequence model that encodes a set of objects and their locations as an input sequence using an LSTM network, and decodes this representation using an LSTM language model. We show that our model, despite encoding object layouts as a sequence, can represent spatial relationships between objects, and generate descriptions that are globally coherent and semantically relevant. We test our approach on the task of object-layout captioning, using only object annotations as input. We additionally show that our model, combined with a state-of-the-art object detector, improves an image captioning model from 0.863 to 0.950 (CIDEr score) on the test benchmark of the standard MS-COCO captioning task.
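To make the described architecture concrete, below is a minimal sketch of an OBJ2TEXT-style encoder-decoder, assuming PyTorch. It follows only the high-level description in the abstract: object categories and their (x, y) locations are embedded and fed as a sequence to an encoder LSTM, and the resulting state conditions an LSTM language model that generates the caption. All layer names, dimensions, and the two-value location encoding are illustrative assumptions, not the authors' actual implementation.

```python
# Illustrative sketch of an OBJ2TEXT-style model (assumes PyTorch).
# Object categories and normalized (x, y) locations are embedded, concatenated,
# and encoded with an LSTM; the final state conditions an LSTM decoder that
# produces per-step vocabulary logits for the caption.
import torch
import torch.nn as nn


class Obj2TextSketch(nn.Module):
    def __init__(self, num_object_classes, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.obj_embed = nn.Embedding(num_object_classes, embed_dim)
        # Project 2-D object locations into the same embedding space.
        self.loc_proj = nn.Linear(2, embed_dim)
        self.encoder = nn.LSTM(2 * embed_dim, hidden_dim, batch_first=True)
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, obj_ids, obj_locs, caption_ids):
        # obj_ids: (batch, num_objects), obj_locs: (batch, num_objects, 2),
        # caption_ids: (batch, caption_len) of word indices (teacher forcing).
        obj_feats = torch.cat(
            [self.obj_embed(obj_ids), self.loc_proj(obj_locs)], dim=-1
        )
        _, (h, c) = self.encoder(obj_feats)        # encode the object layout
        words = self.word_embed(caption_ids)
        dec_out, _ = self.decoder(words, (h, c))   # condition decoder on layout
        return self.out(dec_out)                   # (batch, caption_len, vocab_size)


if __name__ == "__main__":
    model = Obj2TextSketch(num_object_classes=80, vocab_size=10000)
    obj_ids = torch.randint(0, 80, (1, 5))         # e.g. 5 MS-COCO object labels
    obj_locs = torch.rand(1, 5, 2)                 # normalized object centers
    caption = torch.randint(0, 10000, (1, 12))     # caption token indices
    logits = model(obj_ids, obj_locs, caption)
    print(logits.shape)                            # torch.Size([1, 12, 10000])
```

In this sketch the layout is consumed strictly as a sequence, which matches the paper's observation that a sequential encoding can still capture spatial relationships between objects; the specific way the paper encodes locations (e.g. bounding-box extent rather than a center point) may differ.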

Citation (APA)

Yin, X., & Ordonez, V. (2017). OBJ2TEXT: Generating visually descriptive language from object layouts. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017) (pp. 177–187). Association for Computational Linguistics. https://doi.org/10.18653/v1/d17-1017
