VIXEN: Visual Text Comparison Network for Image Difference Captioning

Abstract

We present VIXEN – a technique that succinctly summarizes in text the visual differences between a pair of images in order to highlight any content manipulation present. Our proposed network linearly maps image features in a pairwise manner, constructing a soft prompt for a pretrained large language model. We address the challenges of the low volume of training data and the lack of manipulation variety in existing image difference captioning (IDC) datasets by training on synthetically manipulated images from the recent InstructPix2Pix dataset, generated via the prompt-to-prompt editing framework. We augment this dataset with change summaries produced via GPT-3. We show that VIXEN produces state-of-the-art, comprehensible difference captions for diverse image contents and edit types, offering a potential mitigation against misinformation disseminated via manipulated image content. Code and data are available at http://github.com/alexblck/vixen.

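As a rough illustration of the architecture described in the abstract, the sketch below shows how features from a pair of images might be linearly projected and concatenated into a soft prompt that conditions a pretrained language model. This is not the authors' released implementation: the module name PairwiseSoftPromptMapper, the feature dimensions, and the choice of encoder and language model are all illustrative assumptions.

# Minimal sketch (not the released VIXEN code) of pairwise linear mapping of
# image features into a soft prompt for a pretrained language model.
# Dimensions and names below are assumptions for illustration only.

import torch
import torch.nn as nn


class PairwiseSoftPromptMapper(nn.Module):
    """Linearly maps a pair of image feature grids into LLM input embeddings."""

    def __init__(self, img_feat_dim: int = 768, llm_embed_dim: int = 2048):
        super().__init__()
        # A single linear projection shared across both images (assumption).
        self.proj = nn.Linear(img_feat_dim, llm_embed_dim)

    def forward(self, feats_a: torch.Tensor, feats_b: torch.Tensor) -> torch.Tensor:
        # feats_*: (batch, num_patches, img_feat_dim) from a frozen image encoder.
        prompt_a = self.proj(feats_a)  # (batch, N, llm_embed_dim)
        prompt_b = self.proj(feats_b)  # (batch, N, llm_embed_dim)
        # Concatenate the two projected sequences into one soft prompt.
        return torch.cat([prompt_a, prompt_b], dim=1)  # (batch, 2N, llm_embed_dim)


if __name__ == "__main__":
    mapper = PairwiseSoftPromptMapper()
    a = torch.randn(1, 257, 768)  # e.g. ViT patch features (assumed shape)
    b = torch.randn(1, 257, 768)
    soft_prompt = mapper(a, b)
    print(soft_prompt.shape)  # torch.Size([1, 514, 2048])

In a soft-prompting setup of this kind, the projected sequence would typically be prepended to the language model's token embeddings (for example via the inputs_embeds argument in Hugging Face Transformers) before autoregressively generating the difference caption, with the image encoder and language model kept frozen so that only the linear mapping is trained.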
Citation (APA)
Black, A., Shi, J., Fan, Y., Bui, T., & Collomosse, J. (2024). VIXEN: Visual Text Comparison Network for Image Difference Captioning. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, pp. 846–854). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v38i2.27843
