AdaVQA: Overcoming Language Priors with Adapted Margin Cosine Loss

24 citations · 21 Mendeley readers

Abstract

A number of studies have pointed out that current Visual Question Answering (VQA) models are severely affected by the language prior problem, i.e., they blindly make predictions based on language shortcuts. Some efforts have been devoted to overcoming this issue with carefully designed models. However, no prior work addresses it from the perspective of answer feature-space learning, despite the fact that existing VQA methods all cast VQA as a classification task. Motivated by this, we attempt to tackle the language prior problem from the viewpoint of feature-space learning. An adapted margin cosine loss is designed to properly discriminate between the frequent and the sparse answer feature spaces under each question type. In this way, the limited patterns within the language modality are largely suppressed, mitigating the language priors. We apply this loss function to several baseline models and evaluate its effectiveness on two VQA-CP benchmarks. Experimental results demonstrate that our proposed adapted margin cosine loss enhances the baseline models by an absolute performance gain of 15% on average, strongly verifying the potential of tackling the language prior problem in VQA from the angle of answer feature-space learning.
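To make the central idea concrete, the following is a minimal sketch of a large-margin cosine loss (in the CosFace style) with a per-class margin. The abstract only states that the margin is adapted to separate frequent from sparse answers under each question type; the `margins` array and the function name here are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def margin_cosine_loss(features, weights, labels, margins, scale=16.0):
    """CosFace-style margin cosine loss with a per-class margin.

    `margins` is a per-answer margin array; how it is adapted to answer
    frequency per question type is an assumption left to the reader --
    this sketch only shows the margin-cosine mechanism itself.
    """
    # L2-normalize features and class weights so logits are cosines
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = f @ w.T                                   # (batch, num_answers)

    # Subtract the (per-class) margin from the ground-truth logit only
    logits = scale * cos
    logits[np.arange(len(labels)), labels] -= scale * margins[labels]

    # Standard softmax cross-entropy on the margin-adjusted logits
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()
```

Because the margin is subtracted only from the correct class's cosine, the model must push each answer's feature farther from the decision boundary, which tightens class clusters in the answer feature space.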

Citation (APA)

Guo, Y., Nie, L., Cheng, Z., Ji, F., Zhang, J., & Del Bimbo, A. (2021). AdaVQA: Overcoming Language Priors with Adapted Margin Cosine Loss. In IJCAI International Joint Conference on Artificial Intelligence (pp. 708–714). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2021/98
