HaVQA: A Dataset for Visual Question Answering and Multimodal Research in Hausa Language


Abstract

This paper presents HaVQA, the first multimodal dataset for visual question answering (VQA) in the Hausa language. The dataset was created by manually translating 6,022 English question-answer pairs associated with 1,555 unique images from the Visual Genome dataset. As a result, it provides 12,044 gold-standard English-Hausa parallel sentences, translated in a way that guarantees their semantic match with the corresponding visual information. We conducted several baseline experiments on the dataset, covering visual question answering, visual question elicitation, and text-only and multimodal machine translation.
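To make the parallel structure concrete, the sketch below shows one way a single HaVQA record could be represented in Python. The field names, the image id, and the Hausa strings are illustrative assumptions, not the dataset's official schema; only the overall layout (an English question-answer pair, its manual Hausa translation, and a link to a Visual Genome image) follows the description in the abstract.

from dataclasses import dataclass

# Minimal sketch of one HaVQA record. Field names and sample values are
# assumptions made for illustration; they are not the dataset's official
# schema. Only the overall structure (an English QA pair, its Hausa
# translation, and a pointer to a Visual Genome image) follows the paper.
@dataclass
class HaVQARecord:
    image_id: int      # identifier of the associated Visual Genome image
    question_en: str   # original English question
    answer_en: str     # original English answer
    question_ha: str   # manual Hausa translation of the question
    answer_ha: str     # manual Hausa translation of the answer

# Hypothetical example; the Hausa text is a rough illustrative translation.
example = HaVQARecord(
    image_id=123456,
    question_en="How many dogs are in the picture?",
    answer_en="Two",
    question_ha="Karnuka nawa ne a cikin hoton?",
    answer_ha="Biyu",
)

print(example.question_en, "->", example.question_ha)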

References

Deep residual learning for image recognition
Microsoft COCO: Common objects in context
Visual Genome: Connecting language and vision using crowdsourced dense image annotations


Citation (APA)

Parida, S., Abdulmumin, I., Muhammad, S. H., Bose, A., Kohli, G. S., Ahmad, I. S., … Kakudi, H. A. (2023). HaVQA: A dataset for visual question answering and multimodal research in Hausa language. In Findings of the Association for Computational Linguistics: ACL 2023 (pp. 10162–10183). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.findings-acl.646

