A Large-Scale Multilingual Study of Visual Constraints on Linguistic Selection of Descriptions

Abstract

We present a large, multilingual study into how vision constrains linguistic choice, covering four languages and five linguistic properties, such as verb transitivity or use of numerals. We propose a novel method that leverages existing corpora of images with captions written by native speakers, and apply it to nine corpora, comprising 600k images and 3M captions. We study the relation between visual input and linguistic choices by training classifiers to predict the probability of expressing a property from raw images, and find evidence supporting the claim that linguistic properties are constrained by visual context across languages. We complement this investigation with a corpus study, taking the test case of numerals. Specifically, we use existing annotations (number or type of objects) to investigate the effect of different visual conditions on the use of numeral expressions in captions, and show that similar patterns emerge across languages. Our methods and findings both confirm and extend existing research in the cognitive literature. We additionally discuss possible applications for language generation. We make our codebase publicly available.
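To make the classifier setup described in the abstract concrete, the sketch below shows one way such a probe could be set up: a frozen pretrained image encoder feeding a small binary head that predicts whether a caption for the image would express a given property (e.g., contain a numeral). This is a minimal, hypothetical illustration, not the authors' released code; the backbone, feature dimension, and property label are assumptions.

```python
# Hypothetical sketch: predict from a raw image the probability that its
# caption expresses a linguistic property (here, "contains a numeral").
# Not the authors' implementation; model and label choices are assumptions.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Frozen pretrained image encoder (assumption: any standard CNN backbone).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()   # expose the 512-d pooled features
backbone.eval()

# Binary head: P(property expressed in caption | image features).
# In practice it would be trained on (image, label) pairs, where labels
# are derived automatically from the captions.
head = nn.Sequential(nn.Linear(512, 1), nn.Sigmoid())

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def predict_property_prob(image_path: str) -> float:
    """Return the predicted probability that a caption for this image
    would express the target property (e.g., use a numeral)."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        feats = backbone(x)
    return head(feats).item()
```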

Citation (APA)

Berger, U., Frermann, L., Stanovsky, G., & Abend, O. (2023). A Large-Scale Multilingual Study of Visual Constraints on Linguistic Selection of Descriptions. In EACL 2023 - 17th Conference of the European Chapter of the Association for Computational Linguistics, Findings of EACL 2023 (pp. 2240–2254). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-eacl.172
