ICECAP: Information Concentrated Entity-aware Image Captioning


Abstract

Most current image captioning systems focus on describing general image content and lack the background knowledge needed to deeply understand an image, such as exact named entities or concrete events. In this work, we focus on the entity-aware news image captioning task, which aims to generate informative captions by leveraging the associated news article to provide background knowledge about the target image. However, due to the length of news articles, previous works employ them only at the coarse article or sentence level, which is not fine-grained enough to identify relevant events and choose named entities accurately. To overcome these limitations, we propose an Information Concentrated Entity-aware news image CAPtioning (ICECAP) model, which progressively concentrates on relevant textual information within the corresponding news article, from the sentence level down to the word level. Our model first performs coarse concentration on relevant sentences using a cross-modality retrieval model and then generates captions by further concentrating on relevant words within those sentences. Extensive experiments on both the BreakingNews and GoodNews datasets demonstrate the effectiveness of our proposed method, which outperforms other state-of-the-art approaches.
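The two-stage concentration described above can be illustrated with a minimal sketch. This is not the authors' implementation — the embedding dimensions, similarity measure, and function names below are illustrative assumptions; it only shows the idea of coarse sentence retrieval followed by fine word-level attention.

```python
import numpy as np

def top_k_sentences(image_vec, sentence_vecs, k=3):
    """Coarse concentration (illustrative): retrieve the k article
    sentences whose embeddings are most cosine-similar to the image
    embedding in a shared space."""
    sims = sentence_vecs @ image_vec / (
        np.linalg.norm(sentence_vecs, axis=1) * np.linalg.norm(image_vec) + 1e-8
    )
    return np.argsort(-sims)[:k]

def word_attention(query_vec, word_vecs):
    """Fine concentration (illustrative): softmax attention over the
    words of the retrieved sentences, yielding weights the decoder
    could use to pick out relevant words and named entities."""
    scores = word_vecs @ query_vec
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    return weights / weights.sum()
```

In this toy view, the retrieval step prunes the long article down to a few candidate sentences, and the attention step distributes probability mass over their individual words — mirroring the sentence-to-word refinement the abstract describes.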

Citation (APA)

Hu, A., Chen, S., & Jin, Q. (2020). ICECAP: Information Concentrated Entity-aware Image Captioning. In MM 2020 - Proceedings of the 28th ACM International Conference on Multimedia (pp. 4217–4225). Association for Computing Machinery, Inc. https://doi.org/10.1145/3394171.3413576
