Which *BERT? A survey organizing contextualized encoders

Abstract

Pretrained contextualized text encoders are now a staple of the NLP community. We present a survey on language representation learning with the aim of consolidating a series of shared lessons learned across a variety of recent efforts. While significant advancements continue at a rapid pace, we find that enough has now been discovered, in different directions, that we can begin to organize advances according to common themes. Through this organization, we highlight important considerations when interpreting recent contributions and choosing which model to use.
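For readers less familiar with this class of models, the following is a minimal sketch (not from the paper) of what using a pretrained contextualized encoder looks like in practice. It assumes the Hugging Face transformers library and the bert-base-uncased checkpoint; any of the encoders organized in this survey could be substituted by changing the model name.

    import torch
    from transformers import AutoModel, AutoTokenizer

    # Load a pretrained contextualized encoder (BERT-base is an illustrative
    # choice; other surveyed encoders can be swapped in by name).
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    # Encode a sentence; each (sub)word token receives a context-dependent vector.
    inputs = tokenizer("Which *BERT should I use?", return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    print(outputs.last_hidden_state.shape)  # (batch_size, num_tokens, hidden_size)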

Citation (APA)

Xia, P., Wu, S., & Van Durme, B. (2020). Which *BERT? A survey organizing contextualized encoders. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020) (pp. 7516–7533). Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.emnlp-main.608
