Image captioning models tend to describe images in an object-centric way, emphasising visible objects. But image descriptions can also abstract away from objects and describe the type of scene depicted. In this paper, we explore the potential of a state-of-the-art Vision and Language model, VinVL, to caption images at the scene level using (1) a novel dataset which pairs images with both object-centric and scene descriptions. Through (2) an in-depth analysis of the effect of the fine-tuning, we show (3) that a small amount of curated data suffices to generate scene descriptions without losing the capability to identify object-level concepts in the scene; moreover, the model acquires a more holistic view of the image compared to when object-centric descriptions are generated. We discuss the parallels between these results and insights from computational and cognitive science research on scene perception.
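To make the dataset idea concrete, here is a minimal sketch of what pairing an image with both object-centric and scene-level descriptions could look like. The field names and example values are assumptions for illustration only; the abstract does not specify the paper's actual dataset schema.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class PairedCaptionExample:
    """One image annotated at two levels of abstraction (hypothetical schema)."""
    image_id: str
    object_captions: List[str]  # object-centric descriptions of visible entities
    scene_caption: str          # abstract, scene-level description


# Illustrative entry; values are invented, not taken from the paper's data.
example = PairedCaptionExample(
    image_id="000000139.jpg",
    object_captions=["a man holding a racket on a tennis court"],
    scene_caption="a professional tennis match",
)

# A fine-tuning setup along the lines the abstract suggests would use the
# scene-level caption as the generation target, while the object-centric
# captions remain available for comparison and evaluation.
print(example.scene_caption)
```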
Cafagna, M., van Deemter, K., & Gatt, A. (2022). Understanding Cross-modal Interactions in V&L Models that Generate Scene Descriptions. In UM-IoS 2022 - Unimodal and Multimodal Induction of Linguistic Structures, Proceedings of the Workshop (pp. 56–72). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.umios-1.6