What do Models Learn From Training on More Than Text? Measuring Visual Commonsense Knowledge


Abstract

There are limitations in learning language from text alone. Therefore, recent focus has been on developing multimodal models. However, few benchmarks exist that can measure what language models learn about language from multimodal training. We hypothesize that training on a visual modality should improve the visual commonsense knowledge of language models. Therefore, we introduce two evaluation tasks for measuring visual commonsense knowledge in language models and use them to evaluate different multimodal models and unimodal baselines. Primarily, we find that the visual commonsense knowledge is not significantly different between the multimodal models and unimodal baseline models trained on visual text data.

Citation (APA)

Hagström, L., & Johansson, R. (2022). What do Models Learn From Training on More Than Text? Measuring Visual Commonsense Knowledge. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 252–261). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-srw.19
