Panoptic segmentation-based attention for image captioning


Abstract

Image captioning is the task of generating textual descriptions of images. To obtain a better image representation, attention mechanisms have been widely adopted in image captioning. However, in existing models with detection-based attention, the rectangular attention regions are not fine-grained: they contain irrelevant areas (e.g., background or overlapped regions) around the object, causing the model to generate inaccurate captions. To address this issue, we propose panoptic segmentation-based attention, which performs attention at the mask level (i.e., the shape of the main part of an instance). Our approach extracts feature vectors from the corresponding segmentation regions, which is more fine-grained than current attention mechanisms. Moreover, to process features of different classes independently, we propose a dual-attention module that is generic and can be applied to other frameworks. Experimental results show that our model recognizes overlapped objects and understands the scene better. Our approach achieves competitive performance against state-of-the-art methods. Our code is publicly available.
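The core idea above — pooling a feature vector over each panoptic segmentation mask instead of a rectangular box, then attending over the resulting per-segment vectors — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the use of simple masked average pooling, and the dot-product attention scoring are assumptions for clarity.

```python
import numpy as np

def mask_pooled_features(feature_map, masks, eps=1e-8):
    """Masked average pooling: one feature vector per segmentation mask.

    feature_map: (C, H, W) CNN activations.
    masks:       (N, H, W) binary masks from panoptic segmentation.
    Returns:     (N, C) per-segment feature vectors.
    """
    C = feature_map.shape[0]
    flat = feature_map.reshape(C, -1)        # (C, H*W)
    m = masks.reshape(masks.shape[0], -1)    # (N, H*W)
    sums = m @ flat.T                        # (N, C) sum of features inside each mask
    areas = m.sum(axis=1, keepdims=True)     # (N, 1) pixel count per mask
    return sums / (areas + eps)              # average over mask pixels only

def attend(segment_features, query):
    """Soft attention over segment features (illustrative dot-product scoring).

    segment_features: (N, C); query: (C,) e.g. a caption-decoder hidden state.
    Returns a (C,) context vector: a weighted sum of segment features.
    """
    scores = segment_features @ query                # (N,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                         # softmax over segments
    return weights @ segment_features                # (C,)
```

Because each vector is averaged only over its mask's pixels, background pixels inside an object's bounding box no longer leak into that object's representation, which is the fine-grained property the abstract describes.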

Citation (APA)

Cai, W., Xiong, Z., Sun, X., Rosin, P. L., Jin, L., & Peng, X. (2020). Panoptic segmentation-based attention for image captioning. Applied Sciences (Switzerland), 10(1). https://doi.org/10.3390/app10010391
