schuBERT: Optimizing elements of BERT

Citations: 13 · Mendeley readers: 144

Abstract

Transformers (Vaswani et al., 2017) have gradually become a key component of many state-of-the-art natural language representation models. A recent Transformer-based model, BERT (Devlin et al., 2018), achieved state-of-the-art results on various natural language processing tasks, including GLUE, SQuAD v1.1, and SQuAD v2.0. This model, however, is computationally expensive and has a huge number of parameters. In this work we revisit the architecture choices of BERT in an effort to obtain a lighter model. We focus on reducing the number of parameters, yet our methods can be applied towards other objectives such as FLOPs or latency. We show that much more efficient light BERT models can be obtained by reducing algorithmically chosen architecture design dimensions rather than by reducing the number of Transformer encoder layers. In particular, our schuBERT gives 6.6% higher average accuracy on the GLUE and SQuAD datasets as compared to BERT with three encoder layers while having the same number of parameters.
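To make the comparison concrete, here is a minimal sketch (not the authors' code) of the parameter-accounting idea behind the abstract: holding the total parameter budget roughly fixed, one can either drop encoder layers or shrink the per-layer design dimensions. The helper `encoder_params` and the specific dimension values below are illustrative assumptions following the standard Transformer encoder layout of Vaswani et al. (2017), not values from the paper.

```python
def encoder_params(l: int, h: int, a: int, f: int, vocab: int = 30522) -> int:
    """Approximate parameter count of a BERT-style encoder stack.

    l: number of encoder layers, h: hidden size, a: attention heads,
    f: feed-forward (intermediate) size, vocab: WordPiece vocabulary size.
    Note: in the standard layout the heads split h, so `a` does not
    change the count; it is listed as one of the design dimensions.
    """
    embed = vocab * h                 # token embeddings (positions/segments omitted)
    attn = 4 * h * h + 4 * h          # Q, K, V, output projections + biases
    ffn = 2 * h * f + h + f           # two feed-forward projections + biases
    norms = 4 * h                     # two LayerNorms (scale + shift each)
    return embed + l * (attn + ffn + norms)

# BERT-base dimensions (h=768, f=3072) with the layer count cut to 3:
bert3 = encoder_params(l=3, h=768, a=12, f=3072)

# An alternative in the spirit of schuBERT: keep all 12 layers but shrink
# the per-layer dimensions to roughly match the parameter budget
# (h=520, f=1200 are hypothetical, chosen only to land near the same total):
slim = encoder_params(l=12, h=520, a=12, f=1200)

print(f"3-layer BERT:  {bert3 / 1e6:.1f}M parameters")  # ~44.7M
print(f"slim 12-layer: {slim / 1e6:.1f}M parameters")   # ~43.9M
```

The paper's claim is that, at a matched budget like this, choosing which dimensions to shrink algorithmically yields better GLUE/SQuAD accuracy than simply removing layers.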

Citation (APA)

Khetan, A., & Karnin, Z. (2020). schuBERT: Optimizing elements of BERT. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 2807–2818). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.acl-main.250
