Interactive Key-Value Memory-augmented Attention for Image Paragraph Captioning


Abstract

Image paragraph captioning (IPC) aims to generate a fine-grained paragraph describing the visual content of an image. Significant progress has been made with deep neural networks, in which the attention mechanism plays an essential role. However, conventional attention mechanisms tend to ignore past alignment information, which often results in repetitive or incomplete captions. In this paper, we propose an Interactive key-value Memory-augmented Attention model for image Paragraph captioning (IMAP) that keeps track of the attention history (coverage of salient objects) along the update chain of the decoder state, thereby avoiding repetitive or incomplete image descriptions. In addition, we employ an adaptive attention mechanism to realize adaptive alignment from image regions to caption words: an image region can be mapped to an arbitrary number of caption words, and a caption word can likewise attend to an arbitrary number of image regions. Extensive experiments on a benchmark dataset (i.e., Stanford) demonstrate the effectiveness of our IMAP model.
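To illustrate the idea of attention that remembers its own history, here is a minimal sketch of one decoding step of coverage-aware key-value attention. This is not the authors' IMAP implementation: it assumes scaled dot-product scoring and a simple additive coverage penalty, and the function name and `beta` weight are illustrative.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_attention_step(query, keys, values, coverage, beta=1.0):
    """One decoding step of key-value attention with a coverage penalty.

    query    : (d,)   current decoder state
    keys     : (n, d) one memory key per image region
    values   : (n, d) one memory value per image region
    coverage : (n,)   accumulated past attention over the n regions
    beta     : illustrative weight discouraging already-attended regions
    """
    scores = keys @ query / np.sqrt(keys.shape[1])  # scaled dot-product scores
    scores = scores - beta * coverage               # penalize revisited regions
    alpha = softmax(scores)                         # attention distribution
    context = alpha @ values                        # attended context vector
    return context, coverage + alpha                # update attention history

# Toy usage with random region features.
rng = np.random.default_rng(0)
n, d = 4, 8
keys = rng.standard_normal((n, d))
values = rng.standard_normal((n, d))
query = rng.standard_normal(d)
coverage = np.zeros(n)
ctx, coverage = memory_attention_step(query, keys, values, coverage)
```

Carrying `coverage` across decoding steps is what lets the attention "remember" which regions it has already described; the penalty term pushes later steps toward uncovered regions, which is the intuition behind avoiding repetitive or incomplete paragraphs.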

Cite

APA

Xu, C., Li, Y., Li, C., Ao, X., Yang, M., & Tian, J. (2020). Interactive Key-Value Memory-augmented Attention for Image Paragraph Captioning. In COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference (pp. 3132–3142). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.coling-main.279
