R3Net: Relation-embedded Representation Reconstruction Network for Change Captioning


Abstract

Change captioning aims to describe the fine-grained difference between two similar images with a natural language sentence. Viewpoint change is the most typical distractor in this task: it alters the scale and location of objects and overwhelms the representation of the real change. In this paper, we propose a Relation-embedded Representation Reconstruction Network (R3Net) to explicitly distinguish the real change from a large amount of clutter and irrelevant changes. Specifically, a relation-embedded module is first devised to explore potential changed objects within the clutter. Then, based on the semantic similarities of corresponding locations in the two images, a representation reconstruction module (RRM) is designed to learn a reconstruction representation and further model the difference representation. Besides, we introduce a syntactic skeleton predictor (SSP) to enhance the semantic interaction between change localization and caption generation. Extensive experiments show that the proposed method achieves state-of-the-art results on two public datasets.
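The core idea behind the RRM described above — reconstructing one image's features from the other via location-wise semantic similarity, so that the residual isolates what actually changed — can be sketched roughly as follows. This is a minimal NumPy illustration under our own assumptions (feature shapes, dot-product similarity, softmax normalization, and the function names are ours), not the paper's actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def reconstruct_difference(x_bef, x_aft):
    """Hypothetical sketch of similarity-based reconstruction.

    x_bef, x_aft: (N, D) features for N spatial locations of the
    'before' and 'after' images. Each 'before' location is rebuilt
    as a similarity-weighted mixture of 'after' locations; the
    residual serves as a difference representation.
    """
    d = x_bef.shape[-1]
    sim = softmax(x_bef @ x_aft.T / np.sqrt(d))  # (N, N) location similarity
    recon = sim @ x_aft                          # reconstructed 'before' features
    return x_bef - recon                         # difference representation
```

Locations that are well explained by the other image produce small residuals, while genuinely changed locations leave large ones, which is the signal a captioner can then attend to.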

Citation (APA)
Tu, Y., Li, L., Yan, C., Gao, S., & Yu, Z. (2021). R3Net: Relation-embedded Representation Reconstruction Network for Change Captioning. In EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 9319–9329). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.emnlp-main.735
