COIN: Counterfactual Image Generation for Visual Question Answering Interpretation

Abstract

Due to significant advances in Natural Language Processing and Computer Vision models, Visual Question Answering (VQA) systems are becoming more intelligent and advanced. However, they are still error-prone when dealing with relatively complex questions. Therefore, it is important to understand the behaviour of VQA models before adopting their results. In this paper, we introduce an interpretability approach for VQA models by generating counterfactual images. Specifically, the generated image is supposed to have the minimal possible change to the original image while leading the VQA model to give a different answer. In addition, our approach ensures that the generated image is realistic. Since quantitative metrics cannot be employed to evaluate the interpretability of the model, we carried out a user study to assess different aspects of our approach. In addition to interpreting the results of VQA models on single images, the obtained results and the discussion provide an extensive explanation of VQA models' behaviour.
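The core idea described in the abstract — finding a minimally changed image that flips the VQA model's answer — can be illustrated with a toy sketch. The stand-in linear "VQA model", the feature dimensions, and the gradient-ascent search below are all illustrative assumptions, not the paper's actual method (which operates on real images with a generative model to keep counterfactuals realistic):

```python
import numpy as np

# Toy stand-in for a VQA model: maps an "image" feature vector to
# answer scores for a fixed question. (Illustrative only; the paper
# uses a real VQA network and enforces realism on generated images.)
W = np.array([[1.0, -0.5],
              [-1.0, 0.8]])  # 2 candidate answers, 2 image features

def vqa_answer(x):
    """Predicted answer index for image features x."""
    return int(np.argmax(W @ x))

def counterfactual(x, steps=200, lr=0.05, lam=0.1):
    """Search for a minimally changed x' whose predicted answer differs.

    Sketch of the objective: increase the score gap toward the other
    answer while penalising the L2 distance to the original image,
    and stop as soon as the predicted answer flips.
    """
    orig = vqa_answer(x)
    other = 1 - orig
    xp = x.copy()
    for _ in range(steps):
        # Gradient of (score_other - score_orig) - lam * ||xp - x||^2
        grad = (W[other] - W[orig]) - 2 * lam * (xp - x)
        xp = xp + lr * grad
        if vqa_answer(xp) != orig:
            break  # answer flipped: xp is the counterfactual
    return xp

x = np.array([1.0, 0.2])
xp = counterfactual(x)
print(vqa_answer(x), vqa_answer(xp))  # original vs. counterfactual answer
```

The early stopping is what keeps the change minimal: the search halts at the first perturbation that crosses the model's decision boundary rather than continuing to a high-confidence alternative answer.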


Citation (APA)

Boukhers, Z., Hartmann, T., & Jürjens, J. (2022). COIN: Counterfactual Image Generation for Visual Question Answering Interpretation. Sensors, 22(6). https://doi.org/10.3390/s22062245
