Vision-and-Language Pretrained Models: A Survey

Abstract

Pretrained models have produced great success in both Computer Vision (CV) and Natural Language Processing (NLP). This progress has led to Vision-Language Pretrained Models (VLPMs), which learn joint representations of vision and language by feeding visual and linguistic content into a multi-layer transformer. In this paper, we present an overview of the major advances achieved in VLPMs for producing joint representations of vision and language. As preliminaries, we briefly describe the general task definition and the generic architecture of VLPMs. We first discuss the language and vision data encoding methods and then present the mainstream VLPM structures as the core content. We further summarise several essential pretraining and fine-tuning strategies. Finally, we highlight three future directions for both CV and NLP researchers to provide insightful guidance.
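To make the abstract's core idea concrete, below is a minimal, hypothetical sketch of a single-stream joint encoder: visual region features and language token embeddings are projected into a shared space, concatenated, and passed through a multi-layer transformer. All module names, dimensions, and hyperparameters here are illustrative assumptions, not the specification of any particular VLPM surveyed in the paper.

```python
# Illustrative sketch only: a toy single-stream vision-language encoder.
# Dimensions (768 hidden, 2048 region features) are common conventions,
# assumed here for concreteness rather than taken from the survey.
import torch
import torch.nn as nn

class TinyVLPM(nn.Module):
    def __init__(self, vocab_size=30522, visual_dim=2048, hidden=768,
                 layers=4, heads=8):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, hidden)   # language token embedding
        self.visual_proj = nn.Linear(visual_dim, hidden)     # project detector region features
        self.type_emb = nn.Embedding(2, hidden)               # segment type: 0 = text, 1 = vision
        encoder_layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=heads,
                                                   batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=layers)

    def forward(self, token_ids, region_feats):
        # token_ids: (B, T) word-piece ids; region_feats: (B, R, visual_dim)
        text = self.token_emb(token_ids) + self.type_emb(torch.zeros_like(token_ids))
        vis = self.visual_proj(region_feats)
        vis = vis + self.type_emb(torch.ones(vis.shape[:2], dtype=torch.long,
                                             device=vis.device))
        joint = torch.cat([text, vis], dim=1)   # single-stream concatenation
        return self.encoder(joint)              # contextualised joint representation

# Usage: 12 text tokens and 36 detected regions for a batch of 2 image-text pairs.
out = TinyVLPM()(torch.randint(0, 30522, (2, 12)), torch.randn(2, 36, 2048))
print(out.shape)  # torch.Size([2, 48, 768])
```

In practice, surveyed models differ in how they encode vision (region features, grid features, or patches), whether the streams are fused in a single transformer or kept in two cross-attending towers, and which pretraining objectives (e.g. masked language modelling, image-text matching) are applied on top of such a backbone.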

Citation (APA)

Long, S., Cao, F., Han, S. C., & Yang, H. (2022). Vision-and-Language Pretrained Models: A Survey. In IJCAI International Joint Conference on Artificial Intelligence (pp. 5530–5537). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2022/773
