MUST-VQA: MUltilingual Scene-Text VQA


Abstract

In this paper, we present a framework for Multilingual Scene Text Visual Question Answering that handles new languages in a zero-shot fashion. Specifically, we consider the task of Scene Text Visual Question Answering (STVQA) in which the question can be asked in different languages and is not necessarily aligned with the language of the scene text. Accordingly, we introduce MUST-VQA, a natural step towards a more generalized version of STVQA. We discuss two evaluation scenarios in the constrained setting, namely IID and zero-shot, and demonstrate that models can perform on par in the zero-shot setting. We further provide extensive experiments showing the effectiveness of adapting multilingual language models to STVQA tasks.

Citation (APA)

Vivoli, E., Biten, A. F., Mafla, A., Karatzas, D., & Gomez, L. (2023). MUST-VQA: MUltilingual Scene-Text VQA. In Lecture Notes in Computer Science (Vol. 13804 LNCS, pp. 345–358). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-25069-9_23
