Why is Winoground Hard? Investigating Failures in Visuolinguistic Compositionality


Abstract

Recent visuolinguistic pre-trained models show promising progress on various end tasks such as image retrieval and video captioning. Yet, they fail miserably on the recently proposed Winoground dataset (Thrush et al., 2022), which challenges models to match paired images and English captions, with items constructed to overlap lexically but differ in meaning (e.g., “there is a mug in some grass” vs. “there is some grass in a mug”). By annotating the dataset using new fine-grained tags, we show that solving the Winoground task requires not just compositional language understanding, but a host of other abilities like commonsense reasoning or locating small, out-of-focus objects in low-resolution images. In this paper, we identify the dataset's main challenges through a suite of experiments on related tasks (probing task, image retrieval task), data augmentation, and manual inspection of the dataset. Our analysis suggests that a main challenge in visuolinguistic models may lie in fusing visual and textual representations, rather than in compositional language understanding. We release our annotation and code at https://github.com/ajd12342/why-winoground-hard.
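The matching task the abstract describes can be made concrete with a small sketch. In Winoground, each item pairs two captions (c0, c1) with two images (i0, i1), and a model is scored on whether its caption-image similarity scores prefer the correct pairings in both directions. The function names and the score matrix below are illustrative assumptions, not taken from the paper's released code:

```python
# Hypothetical sketch of the Winoground evaluation protocol (Thrush et al., 2022).
# s[c][i] holds a model's similarity score for caption c paired with image i;
# the correct pairings are (c0, i0) and (c1, i1).

def text_score(s):
    """For each image, the model must prefer its correct caption."""
    return s[0][0] > s[1][0] and s[1][1] > s[0][1]

def image_score(s):
    """For each caption, the model must prefer its correct image."""
    return s[0][0] > s[0][1] and s[1][1] > s[1][0]

def group_score(s):
    """Both directions must be correct simultaneously."""
    return text_score(s) and image_score(s)

# Illustrative scores: this model ranks all four comparisons correctly.
s = [[0.9, 0.4],
     [0.3, 0.8]]
print(text_score(s), image_score(s), group_score(s))  # True True True
```

The group score is the strictest of the three, which is why chance-level group performance (as reported for many pre-trained models on Winoground) is so striking.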

Citation (APA)

Diwan, A., Berry, L., Choi, E., Harwath, D., & Mahowald, K. (2022). Why is Winoground Hard? Investigating Failures in Visuolinguistic Compositionality. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 (pp. 2236–2250). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.emnlp-main.143
