Pretrained contextualized text encoders are now a staple of the NLP community. We present a survey on language representation learning with the aim of consolidating a series of shared lessons learned across a variety of recent efforts. While significant advancements continue at a rapid pace, we find that enough has now been discovered, in different directions, that we can begin to organize advances according to common themes. Through this organization, we highlight important considerations when interpreting recent contributions and choosing which model to use.
CITATION
Xia, P., Wu, S., & Van Durme, B. (2020). Which *BERT? A Survey Organizing Contextualized Encoders. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (pp. 7516–7533). Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.emnlp-main.608