TRAVLR: Now You See It, Now You Don't! A Bimodal Dataset for Evaluating Visio-Linguistic Reasoning

Citations: 0
Readers (Mendeley): 14

Abstract

Numerous visio-linguistic (V+L) representation learning methods have been developed, yet existing datasets do not adequately evaluate the extent to which they represent visual and linguistic concepts in a unified space. We propose several novel evaluation settings for V+L models, including cross-modal transfer. Furthermore, existing V+L benchmarks often report global accuracy scores on the entire dataset, making it difficult to pinpoint the specific reasoning tasks at which models fail and succeed. We present TRAVLR, a synthetic dataset comprising four V+L reasoning tasks. TRAVLR's synthetic nature allows us to constrain its training and testing distributions along task-relevant dimensions, enabling the evaluation of out-of-distribution generalisation. Each example in TRAVLR redundantly encodes the scene in two modalities, allowing either to be dropped or added during training or testing without losing relevant information. We compare the performance of four state-of-the-art V+L models, finding that while they perform well on test examples from the same modality, they all fail at cross-modal transfer and have limited success accommodating the addition or deletion of one modality. We release TRAVLR as an open challenge for the research community.
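The bimodal design described above — one scene redundantly encoded as both an image and a text description, with either modality droppable at train or test time — can be sketched as follows. This is a hypothetical illustration, not the authors' released code; the field names and the `drop_modality` helper are assumptions for exposition.

```python
# Hypothetical sketch of a TraVLR-style bimodal example.
# Because both modalities encode the same scene, either can be
# removed without losing task-relevant information, which is what
# enables cross-modal transfer evaluation (e.g. train text-only,
# test image-only).

from dataclasses import dataclass, replace
from typing import Optional

@dataclass
class BimodalExample:
    image: Optional[str]    # path to the rendered scene image (assumed field)
    caption: Optional[str]  # textual description of the same scene
    question: str           # the reasoning query posed about the scene
    label: bool             # gold answer

def drop_modality(ex: BimodalExample, keep: str) -> BimodalExample:
    """Return a copy of `ex` retaining only one modality ('image' or 'text')."""
    if keep not in ("image", "text"):
        raise ValueError("keep must be 'image' or 'text'")
    return replace(
        ex,
        image=ex.image if keep == "image" else None,
        caption=ex.caption if keep == "text" else None,
    )

# Cross-modal transfer setting: the model sees only text during
# training but only images at test time.
ex = BimodalExample(
    image="scene_001.png",
    caption="A red cube is to the left of a blue ball.",
    question="Is the cube to the left of the ball?",
    label=True,
)
train_view = drop_modality(ex, "text")   # caption + question only
test_view = drop_modality(ex, "image")   # image + question only
```

The question and label are kept in both views; only the scene encoding changes, so any performance gap between views isolates the model's ability to transfer reasoning across modalities.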


Citation (APA)

Chow, K. J., Tan, S., & Kan, M. Y. (2023). TRAVLR: Now You See It, Now You Don't! A Bimodal Dataset for Evaluating Visio-Linguistic Reasoning. In EACL 2023 - 17th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference (pp. 3314–3339). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.eacl-main.242

Readers' Seniority

PhD / Postgrad / Masters / Doc: 3 (60%)
Lecturer / Post doc: 1 (20%)
Researcher: 1 (20%)

Readers' Discipline

Computer Science: 7 (78%)
Medicine and Dentistry: 1 (11%)
Neuroscience: 1 (11%)
