Beyond Accuracy: A Consolidated Tool for Visual Question Answering Benchmarking

Abstract

On the way towards general Visual Question Answering (VQA) systems that are able to answer arbitrary questions, the need arises for evaluation beyond single-metric leaderboards for specific datasets. To this end, we propose a browser-based benchmarking tool for researchers and challenge organizers, with an API for easy integration of new models and datasets to keep up with the fast-changing landscape of VQA. Our tool helps test the generalization capabilities of models across multiple datasets, evaluating not just accuracy, but also performance in more realistic real-world scenarios such as robustness to input noise. Additionally, we include metrics that measure biases and uncertainty, to further explain model behavior. Interactive filtering facilitates discovery of problematic behavior, down to the data-sample level. As a proof of concept, we perform a case study on four models. We find that state-of-the-art VQA models are optimized for specific tasks or datasets, but fail to generalize even to other in-domain test sets; for example, they cannot recognize text in images. Our metrics allow us to quantify which image and question embeddings provide the most robustness to a model. All code is publicly available.
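The abstract describes an API for plugging in models and an evaluation of robustness to input noise across datasets. The Python sketch below illustrates what such a cross-dataset robustness check could look like; it is a hypothetical illustration only, and the names used here (add_gaussian_noise, accuracy, the model and dataset interfaces) are assumptions rather than the tool's actual API.

import random
from typing import Callable, Iterable, List, Optional, Tuple

def add_gaussian_noise(pixels: List[float], sigma: float = 0.1) -> List[float]:
    # Input-noise probe: perturb normalized pixel values with Gaussian
    # noise and clamp the result back to [0, 1].
    return [min(1.0, max(0.0, p + random.gauss(0.0, sigma))) for p in pixels]

def accuracy(model: Callable[[List[float], str], str],
             samples: Iterable[Tuple[List[float], str, str]],
             perturb: Optional[Callable[[List[float]], List[float]]] = None) -> float:
    # Fraction of questions answered correctly, optionally with the
    # image perturbed before it is shown to the model.
    correct = 0
    total = 0
    for image, question, answer in samples:
        if perturb is not None:
            image = perturb(image)
        correct += int(model(image, question) == answer)
        total += 1
    return correct / max(total, 1)

# Usage: comparing clean vs. noisy accuracy per dataset exposes models
# that are tuned to a single benchmark (datasets/model are placeholders).
# for name, samples in datasets.items():
#     clean = accuracy(model, samples)
#     noisy = accuracy(model, samples, perturb=add_gaussian_noise)
#     print(f"{name}: clean={clean:.3f}, noisy={noisy:.3f}")

Reporting the clean and perturbed scores side by side, per dataset, is one simple way to surface the kind of generalization failures the case study in the paper describes.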

Citation (APA)

Väth, D., Tilli, P., & Vu, N. T. (2021). Beyond Accuracy: A Consolidated Tool for Visual Question Answering Benchmarking. In EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations (pp. 114–123). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.emnlp-demo.14
