BADGE: Speeding Up BERT Inference after Deployment via Block-wise BypAsses and DiverGence-Based Early Exiting

Abstract

Early exiting can reduce the average latency of pre-trained language models (PLMs) via its adaptive inference mechanism and can be combined with other inference speed-up methods such as model pruning, thus drawing much attention from industry. In this work, we propose a novel framework, BADGE, which consists of two off-the-shelf methods for improving PLMs' early exiting. We first address the issues of training a multi-exit PLM, the backbone model for early exiting. We propose a novel architecture of block-wise bypasses, which alleviates the conflicts in jointly training multiple intermediate classifiers and thus improves the overall performance of the multi-exit PLM while adding negligible FLOPs to the model. Second, we propose a novel divergence-based early exiting (DGE) mechanism, which obtains early exiting signals by comparing the predicted distributions of the current layer's exit and the previous layers' exits. Extensive experiments on three proprietary datasets and three GLUE benchmark tasks demonstrate that our method achieves a better speedup-performance trade-off than existing baseline methods.
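As a rough illustration of the divergence-based exit rule described above, the sketch below compares the predicted distributions of consecutive exits and stops inference once they agree closely. The choice of Jensen-Shannon divergence, the fixed threshold, and the single-example inference loop are assumptions made for illustration only, not the paper's exact formulation.

```python
# Illustrative sketch only: BADGE's exact divergence measure and exit rule are
# not reproduced here; Jensen-Shannon divergence with a fixed threshold is
# used as a stand-in for comparing consecutive exits' predictions.
import torch
import torch.nn.functional as F


def js_divergence(p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    """Jensen-Shannon divergence between two probability distributions."""
    m = 0.5 * (p + q)
    def kl(a, b):
        return torch.sum(a * (torch.log(a + 1e-12) - torch.log(b + 1e-12)), dim=-1)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)


def should_exit(prev_logits: torch.Tensor, curr_logits: torch.Tensor,
                threshold: float = 0.05) -> bool:
    """Exit early when consecutive exits' predictions have converged:
    a small divergence suggests deeper layers would not change the answer."""
    p = F.softmax(prev_logits, dim=-1)
    q = F.softmax(curr_logits, dim=-1)
    return js_divergence(p, q).item() < threshold


# Hypothetical usage inside a per-example inference loop over a multi-exit encoder:
# prev_logits = None
# for layer, classifier in zip(encoder_layers, exit_classifiers):
#     hidden = layer(hidden)
#     logits = classifier(hidden)
#     if prev_logits is not None and should_exit(prev_logits, logits):
#         break  # stop at this exit and return its prediction
#     prev_logits = logits
```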

Cite (APA)

Zhu, W., Wang, P., Ni, Y., Xie, G., & Wang, X. (2023). BADGE: Speeding Up BERT Inference after Deployment via Block-wise BypAsses and DiverGence-Based Early Exiting. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 5, pp. 500–509). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-industry.48
