Finding Structural Knowledge in Multimodal-BERT

7 citations · 68 Mendeley readers

Abstract

In this work, we investigate the knowledge learned in the embeddings of multimodal-BERT models. More specifically, we probe their ability to store the grammatical structure of linguistic data and the structure learned over objects in visual data. To reach that goal, we first make the inherent structure of language and visuals explicit: for language, through a dependency parse of the sentences that describe the image, and for visuals, through dependencies between the object regions in the image. We call this explicit visual structure the scene tree, which is derived from the dependency tree of the language description. Extensive probing experiments show that the multimodal-BERT models do not encode these scene trees. Code is available at https://github.com/VSJMilewski/multimodal-probes.
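The scene tree described above can be illustrated with a small sketch: given a dependency tree over the caption's words and an alignment from (some) words to object regions, each grounded region's parent becomes the region of its nearest grounded ancestor in the parse. The parse, alignment, and function name below are toy illustrations, not the authors' actual pipeline.

```python
# Hedged sketch: projecting a caption's dependency tree onto aligned
# image regions to obtain a "scene tree". The dependency heads and the
# word-to-region alignment are hypothetical toy data for illustration.

def build_scene_tree(dep_heads, word_to_region):
    """Project a dependency tree over words onto aligned image regions.

    dep_heads: dict mapping each word to its head word (root maps to None)
    word_to_region: dict mapping grounded words to region ids

    Returns a dict mapping each region to its parent region
    (None for the scene root).
    """
    scene_tree = {}
    for word, region in word_to_region.items():
        # Walk up the dependency tree to the nearest grounded ancestor.
        ancestor = dep_heads[word]
        while ancestor is not None and ancestor not in word_to_region:
            ancestor = dep_heads[ancestor]
        parent = word_to_region[ancestor] if ancestor is not None else None
        scene_tree[region] = parent
    return scene_tree

# Toy parse of "a dog on a skateboard": "dog" is the root,
# and "skateboard" attaches to "dog" through the preposition "on".
dep_heads = {"dog": None, "a": "dog", "on": "dog", "skateboard": "on"}
word_to_region = {"dog": "region_0", "skateboard": "region_1"}
print(build_scene_tree(dep_heads, word_to_region))
# {'region_0': None, 'region_1': 'region_0'}
```

Here the ungrounded word "on" is skipped, so the skateboard's region attaches directly under the dog's region, mirroring how the paper flattens linguistic structure into relations between visual objects.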

Citation (APA)

Milewski, V., de Lhoneux, M., & Moens, M. F. (2022). Finding Structural Knowledge in Multimodal-BERT. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 5658–5671). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-long.388
