Effective End-to-End Vision Language Pretraining With Semantic Visual Loss

Abstract

Current vision language pretraining models are dominated by methods that use region visual features extracted from object detectors. Despite their good performance, the extract-then-process pipeline significantly restricts inference speed and therefore limits real-world use cases. However, training vision language models from raw image pixels is difficult, because raw pixels provide much less prior knowledge than region features. In this paper, we systematically study how to leverage auxiliary visual pretraining tasks to help train end-to-end vision language models. We introduce three types of visual losses that enable much faster convergence and better finetuning accuracy. Compared with region feature models, our end-to-end models achieve similar or better performance on downstream tasks and run more than 10 times faster during inference. Compared with other end-to-end models, our proposed method achieves similar or better performance when pretrained for only 10% of the pretraining GPU hours.
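To make the idea concrete, below is a minimal sketch (not the paper's implementation) of attaching an auxiliary semantic visual loss to an end-to-end vision language model that consumes raw pixels rather than detector region features. The model names (`TinyEndToEndVLM`, `patch_tag_head`), the per-patch tag-prediction objective, and the loss weight are hypothetical illustrations of the kind of auxiliary supervision the abstract describes.

```python
# Minimal sketch: end-to-end vision-language model trained on raw pixels,
# with an image-text matching loss plus an auxiliary semantic visual loss.
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyEndToEndVLM(nn.Module):
    def __init__(self, vocab_size=30522, num_tags=1600, dim=256,
                 patch_size=32, image_size=224):
        super().__init__()
        n_patches = (image_size // patch_size) ** 2
        # Raw pixels -> patch embeddings; no external object detector.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size,
                                     stride=patch_size)
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches, dim))
        self.text_embed = nn.Embedding(vocab_size, dim)
        encoder_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                                   batch_first=True)
        self.fusion = nn.TransformerEncoder(encoder_layer, num_layers=2)
        # Two heads: image-text matching + auxiliary per-patch semantics.
        self.itm_head = nn.Linear(dim, 2)
        self.patch_tag_head = nn.Linear(dim, num_tags)

    def forward(self, pixels, token_ids):
        v = self.patch_embed(pixels).flatten(2).transpose(1, 2) + self.pos_embed
        t = self.text_embed(token_ids)
        x = self.fusion(torch.cat([t, v], dim=1))
        n_text = t.size(1)
        itm_logits = self.itm_head(x[:, 0])              # first text token acts as [CLS]
        tag_logits = self.patch_tag_head(x[:, n_text:])  # semantic tags for each patch
        return itm_logits, tag_logits


model = TinyEndToEndVLM()
pixels = torch.randn(2, 3, 224, 224)
token_ids = torch.randint(0, 30522, (2, 16))
itm_labels = torch.tensor([1, 0])  # matched / mismatched image-text pair
# Hypothetical per-patch semantic targets, e.g. distilled from a frozen tagger.
tag_targets = torch.randint(0, 1600, (2, (224 // 32) ** 2))

itm_logits, tag_logits = model(pixels, token_ids)
loss_itm = F.cross_entropy(itm_logits, itm_labels)
loss_visual = F.cross_entropy(tag_logits.flatten(0, 1), tag_targets.flatten())
loss = loss_itm + 0.5 * loss_visual  # 0.5 is an arbitrary weighting choice
loss.backward()
```

The point of the auxiliary term is that the pixel encoder receives direct semantic supervision during pretraining, which is one plausible way to recover the prior knowledge that region features would otherwise provide and to speed up convergence.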

Cite

APA

Yang, X., Liu, F., & Lin, G. (2023). Effective End-to-End Vision Language Pretraining With Semantic Visual Loss. IEEE Transactions on Multimedia, 25, 8408–8417. https://doi.org/10.1109/TMM.2023.3237166
