TextCaps: A Dataset for Image Captioning with Reading Comprehension

Abstract

Image descriptions can help visually impaired people quickly understand image content. While significant progress has been made in automatic image description and optical character recognition, current approaches are unable to include written text in their descriptions, even though text is omnipresent in human environments and frequently critical to understanding our surroundings. To study how to comprehend text in the context of an image, we collect a novel dataset, TextCaps, with 145k captions for 28k images. Our dataset challenges a model to recognize text, relate it to its visual context, and decide what part of the text to copy or paraphrase, requiring spatial, semantic, and visual reasoning between multiple text tokens and visual entities, such as objects. We study baselines and adapt existing approaches to this new task, which we refer to as image captioning with reading comprehension. Our analysis with automatic and human studies shows that our new TextCaps dataset provides many new technical challenges over previous datasets.

Citation (APA)

Sidorov, O., Hu, R., Rohrbach, M., & Singh, A. (2020). TextCaps: A Dataset for Image Captioning with Reading Comprehension. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12347 LNCS, pp. 742–758). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-58536-5_44
