RefCap: image captioning with referent objects attributes

Abstract

In recent years, significant progress has been made in visual-linguistic multi-modality research, leading to advances in visual comprehension and its applications to computer vision tasks. One fundamental task in visual-linguistic understanding is image captioning, which involves generating a human-understandable textual description for a given input image. This paper introduces a referring-expression image captioning model that incorporates supervision from objects of interest. Our model uses user-specified object keywords as a prefix to generate captions that are specific to the target object. The model consists of three modules: (i) visual grounding, (ii) referring object selection, and (iii) image captioning. To evaluate its performance, we conducted experiments on the RefCOCO and COCO captioning datasets. The experimental results demonstrate that the proposed method effectively generates meaningful captions aligned with users’ specific interests.
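For illustration, the sketch below shows how such a three-module pipeline could be composed in Python. The function names, region format, and placeholder outputs are assumptions made for readability, not the authors' implementation; a real system would wrap a visual grounding detector and a caption decoder behind these interfaces.

```python
# Minimal, runnable sketch of a three-module referring-expression captioning
# pipeline (visual grounding -> referring object selection -> captioning).
# All names and behaviors here are illustrative assumptions, not RefCap's code.
from dataclasses import dataclass
from typing import List


@dataclass
class Region:
    box: tuple   # (x, y, w, h) bounding box proposed by visual grounding
    label: str   # object keyword associated with the region


def visual_grounding(image) -> List[Region]:
    # Placeholder: a real implementation runs a grounding/detection model.
    return [Region((10, 20, 50, 80), "dog"), Region((100, 40, 60, 90), "frisbee")]


def select_referring_objects(regions: List[Region], keyword: str) -> List[Region]:
    # Keep only the regions whose label matches the user-specified keyword.
    return [r for r in regions if r.label == keyword]


def generate_caption(image, targets: List[Region], keyword: str) -> str:
    # Placeholder decoder: the keyword acts as a prefix that steers generation
    # toward the selected target object.
    if not targets:
        return "no object matching the requested keyword"
    return f"{keyword}: a {keyword} in the scene"


def refcap(image, keyword: str) -> str:
    regions = visual_grounding(image)
    targets = select_referring_objects(regions, keyword)
    return generate_caption(image, targets, keyword)


print(refcap(image=None, keyword="dog"))  # -> "dog: a dog in the scene"
```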

Cite (APA)

Park, S., & Paik, J. (2023). RefCap: image captioning with referent objects attributes. Scientific Reports, 13(1). https://doi.org/10.1038/s41598-023-48916-6