The BLA Benchmark: Investigating Basic Language Abilities of Pre-Trained Multimodal Models


Abstract

Despite the impressive performance achieved by pre-trained language-and-vision models in downstream tasks, it remains an open question whether this reflects a proper understanding of image-text interaction. In this work, we explore to what extent they handle basic linguistic constructions (active-passive voice, coordination, and relative clauses) that even preschool children can typically master. We present BLA, a novel, automatically constructed benchmark to evaluate multimodal models on these Basic Language Abilities. We show that different types of Transformer-based systems, such as CLIP, ViLBERT, and BLIP2, generally struggle with BLA in a zero-shot setting, in line with previous findings. Our experiments, in particular, show that most of the tested models only marginally benefit when fine-tuned or prompted with construction-specific samples. Yet, the generative BLIP2 shows promising trends, especially in an in-context learning setting. This opens the door to using BLA not only as an evaluation benchmark but also to improve models' basic language abilities.
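
The abstract refers to evaluating models such as CLIP in a zero-shot setting on image-text tasks. As a rough illustration only (not the paper's actual protocol or data), the sketch below shows how a single zero-shot image-text matching check with CLIP might look using the Hugging Face Transformers API; the image path and captions are hypothetical placeholders, not items from the BLA benchmark.

    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    # Hypothetical item: one image paired with a correct and an incorrect caption.
    image = Image.open("example.jpg")  # placeholder path, not from the BLA dataset
    captions = [
        "The dog is chasing the cat.",    # active-voice description (assumed correct)
        "The dog is chased by the cat.",  # passive voice with swapped roles (incorrect)
    ]

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
    outputs = model(**inputs)

    # CLIP scores each caption against the image; the higher-scoring caption
    # is taken as the model's zero-shot choice.
    probs = outputs.logits_per_image.softmax(dim=1)
    print(probs)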

Citation (APA)

Chen, X., Fernández, R., & Pezzelle, S. (2023). The BLA Benchmark: Investigating Basic Language Abilities of Pre-Trained Multimodal Models. In EMNLP 2023 - 2023 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 5817–5830). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.emnlp-main.356
