Check it again: Progressive visual question answering via visual entailment


Abstract

While sophisticated Visual Question Answering (VQA) models have achieved remarkable success, they tend to answer questions according to superficial correlations between question and answer. Several recent approaches address this language-priors problem. However, most of them predict the answer from a single best output without checking its authenticity. Moreover, they explore only the interaction between image and question, ignoring the semantics of candidate answers. In this paper, we propose a select-and-rerank (SAR) progressive framework based on Visual Entailment. Specifically, we first select the candidate answers relevant to the question or the image, then rerank them via a visual entailment task, which verifies whether the image semantically entails the synthetic statement formed from the question and each candidate answer. Experimental results show the effectiveness of the proposed framework, which establishes a new state-of-the-art accuracy on VQA-CP v2 with a 7.55% improvement.
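The two-stage pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the candidate scores and the entailment scorer below are made-up stand-ins for the base VQA model and the visual entailment model.

```python
# Hedged sketch of the select-and-rerank (SAR) idea: first select top
# candidate answers, then rerank them by a (here simulated) visual
# entailment score over the synthetic question+answer statement.

def select_candidates(answer_scores, n=3):
    """Select the top-n candidate answers from a base VQA model's scores."""
    return sorted(answer_scores, key=answer_scores.get, reverse=True)[:n]

def rerank(question, candidates, entailment_score):
    """Rerank candidates by how well the image entails the synthetic
    statement formed from the question and each candidate answer."""
    statements = {a: f"{question} {a}" for a in candidates}
    return max(candidates, key=lambda a: entailment_score(statements[a]))

# Toy usage with invented scores (hypothetical, for illustration only):
vqa_scores = {"red": 0.5, "blue": 0.3, "green": 0.2, "cat": 0.01}
cands = select_candidates(vqa_scores, n=3)
# Pretend the entailment model, after checking the image, prefers "blue".
fake_entail = {"what color is the sky? red": 0.1,
               "what color is the sky? blue": 0.9,
               "what color is the sky? green": 0.2}
answer = rerank("what color is the sky?", cands,
                lambda s: fake_entail.get(s, 0.0))
```

The point of the rerank step is that a candidate favored by superficial question-answer correlations (here "red") can be overruled once the image itself is checked against each candidate statement.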

Citation (APA)

Si, Q., Lin, Z., Zheng, M., Fu, P., & Wang, W. (2021). Check it again: Progressive visual question answering via visual entailment. In ACL-IJCNLP 2021 - 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Proceedings of the Conference (Vol. 1, pp. 4101–4110). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.acl-long.317
