XDBERT: Distilling Visual Information to BERT from Cross-Modal Systems to Improve Language Understanding


Abstract

Transformer-based models are widely used in natural language understanding (NLU) tasks, and multimodal transformers have been effective in visual-language tasks. This study explores distilling visual information from pretrained multimodal transformers into pretrained language encoders. Our framework is inspired by the success of cross-modal encoders on visual-language tasks, while we alter the learning objective to suit the language-heavy characteristics of NLU. After a small number of extra adaptation steps and fine-tuning, the proposed XDBERT (cross-modal distilled BERT) outperforms pretrained BERT on the General Language Understanding Evaluation (GLUE) benchmark, the Situations With Adversarial Generations (SWAG) benchmark, and readability benchmarks. We analyze the performance of XDBERT on GLUE to show that the improvement is likely visually grounded.
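As a rough illustration of the adaptation stage described above, the sketch below pulls a BERT student's hidden states toward those of a frozen teacher encoder with a mean-squared-error alignment loss before fine-tuning. The abstract does not specify the exact objective or teacher architecture; the MSE loss, the linear projection head, and the use of a plain BERT model as a stand-in for the cross-modal teacher's text stream are all assumptions made for this example.

```python
# Minimal sketch of cross-modal distillation into a language encoder (assumed setup,
# not the paper's exact method). A frozen "teacher" provides target representations;
# the BERT student is adapted to match them for a small number of steps.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
student = BertModel.from_pretrained("bert-base-uncased")   # language encoder being adapted
teacher = BertModel.from_pretrained("bert-base-uncased")   # stand-in for a cross-modal encoder's text stream
teacher.eval()
for p in teacher.parameters():
    p.requires_grad = False

# Hypothetical alignment head mapping student states into the teacher's space.
proj = nn.Linear(student.config.hidden_size, teacher.config.hidden_size)
mse = nn.MSELoss()
optimizer = torch.optim.AdamW(list(student.parameters()) + list(proj.parameters()), lr=1e-5)

def adaptation_step(sentences):
    """One distillation step: align student hidden states with the frozen teacher's."""
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        t_hidden = teacher(**batch).last_hidden_state      # target (teacher) representations
    s_hidden = student(**batch).last_hidden_state          # student representations
    loss = mse(proj(s_hidden), t_hidden)                   # alignment (distillation) loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

print(adaptation_step(["A dog catches a frisbee in the park."]))
```

After such adaptation steps, the student would be fine-tuned on downstream NLU tasks (e.g., GLUE or SWAG) in the usual way.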

Cite

CITATION STYLE

APA

Hsu, C. J., Lee, H. Y., & Tsao, Y. (2022). XDBERT: Distilling Visual Information to BERT from Cross-Modal Systems to Improve Language Understanding. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 2, pp. 479–489). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-short.52
