SOrT-ing VQA Models: Contrastive Gradient Learning for Improved Consistency


Abstract

Recent research in Visual Question Answering (VQA) has revealed state-of-the-art models to be inconsistent in their understanding of the world: they answer seemingly difficult questions requiring reasoning correctly but get simpler associated sub-questions wrong. These sub-questions pertain to lower-level visual concepts in the image that models ideally should understand in order to answer the reasoning question correctly. To address this, we first present a gradient-based interpretability approach to determine the questions most strongly correlated with the reasoning question on an image, and use this to evaluate VQA models on their ability to identify the relevant sub-questions needed to answer a reasoning question. Next, we propose a contrastive gradient learning based approach called Sub-question Oriented Tuning (SOrT) which encourages models to rank relevant sub-questions higher than irrelevant questions for an ⟨image, reasoning question⟩ pair. We show that SOrT improves model consistency by up to 6.5 percentage points over existing approaches, while also improving visual grounding and robustness to rephrasings of questions.
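The contrastive gradient learning idea above can be sketched as a margin-based ranking loss over gradient similarities. The following is a minimal illustrative sketch, not the paper's exact formulation: the function names, the hinge-with-margin form, and the `margin` value are assumptions, and the gradient vectors stand in for answer-score gradients with respect to the image features (as computed by the paper's Grad-CAM-style interpretability step).

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two gradient vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def sort_contrastive_loss(g_reason, g_sub, g_irrel, margin=0.5):
    """Hinge-style contrastive loss (illustrative): encourage the
    reasoning question's answer-gradient to be more similar to a
    relevant sub-question's gradient than to an irrelevant
    question's gradient, by at least `margin`."""
    s_pos = cosine(g_reason, g_sub)    # similarity to relevant sub-question
    s_neg = cosine(g_reason, g_irrel)  # similarity to irrelevant question
    return max(0.0, margin - (s_pos - s_neg))

# Toy gradients: the sub-question's gradient is aligned with the
# reasoning question's, the irrelevant question's is orthogonal.
g_r = np.array([1.0, 0.0])
g_s = np.array([1.0, 0.1])
g_i = np.array([0.0, 1.0])
print(sort_contrastive_loss(g_r, g_s, g_i))  # well-ranked pair: loss is 0.0
```

When the ranking is violated (the irrelevant question's gradient is the more similar one), the loss becomes positive and its gradient nudges the model's visual grounding toward the relevant sub-question.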

Cite

APA

Dharur, S., Tendulkar, P., Batra, D., Parikh, D., & Selvaraju, R. R. (2021). SOrT-ing VQA Models: Contrastive Gradient Learning for Improved Consistency. In NAACL-HLT 2021 - 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference (pp. 3103–3111). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.naacl-main.248
