Consistent Accelerated Inference via Confident Adaptive Transformers

Abstract

We develop a novel approach for confidently accelerating inference in the large and expensive multilayer Transformers that are now ubiquitous in natural language processing (NLP). Amortized or approximate computational methods increase efficiency but can come with unpredictable performance costs. In this work, we present CATs (Confident Adaptive Transformers), in which we simultaneously increase computational efficiency while guaranteeing a specifiable degree of consistency with the original model, with high confidence. Our method trains additional prediction heads on top of intermediate layers and uses a meta consistency classifier to dynamically decide when to stop allocating computational effort to each input. To calibrate our early prediction stopping rule, we formulate a unique extension of conformal prediction. We demonstrate the effectiveness of this approach on four classification and regression tasks.
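
To make the mechanism concrete, below is a minimal PyTorch sketch of the kind of pipeline the abstract describes: lightweight prediction heads on intermediate layers, a meta consistency classifier that gates early exit, and a held-out calibration step that selects the exit threshold. All names here (`EarlyExitHead`, `adaptive_forward`, `calibrate_threshold`, `tau`, `epsilon`) are illustrative assumptions rather than the paper's actual code, and `calibrate_threshold` is a simplified stand-in for the paper's conformal procedure.

```python
# Illustrative sketch only; names and calibration logic are assumptions,
# not the authors' implementation.
import torch
import torch.nn as nn


class EarlyExitHead(nn.Module):
    """Lightweight prediction head attached to one intermediate layer."""

    def __init__(self, hidden_size: int, num_labels: int):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Pool the first ([CLS]) token and classify.
        return self.classifier(hidden_states[:, 0])


@torch.no_grad()
def adaptive_forward(layers, heads, meta, h, tau):
    """Run encoder layers one at a time; exit once the meta consistency
    score (a scalar in [0, 1] per example; batch size 1 assumed here)
    reaches the calibrated threshold tau."""
    for layer, head in zip(layers, heads):
        h = layer(h)
        logits = head(h)
        if meta(h[:, 0]).item() >= tau:
            return logits  # early exit: predicted consistent with full model
    return logits  # fell through all layers: the full model's prediction


def calibrate_threshold(meta_scores, consistent, epsilon=0.05):
    """Pick a threshold tau such that, among held-out calibration examples
    with meta score >= tau, early predictions agree with the full model at
    a rate of at least 1 - epsilon. (A simplification of the paper's
    conformal calibration, for illustration only.)"""
    order = torch.argsort(meta_scores, descending=True)
    agree = consistent[order].float()
    # Agreement rate among the top-k highest-scoring examples, for every k.
    rate = torch.cumsum(agree, dim=0) / torch.arange(1, len(agree) + 1)
    valid = (rate >= 1.0 - epsilon).nonzero()
    if len(valid) == 0:
        return float("inf")  # no safe threshold: never exit early
    k = valid.max().item()
    return meta_scores[order[k]].item()
```

In this sketch, lowering `tau` lets more inputs exit early (faster inference) at the cost of more disagreement with the full model; the paper's conformal calibration chooses the threshold so that the rate of inconsistency with the original model stays below a user-specified tolerance with high confidence.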

Citation (APA)

Schuster, T., Fisch, A., Jaakkola, T., & Barzilay, R. (2021). Consistent Accelerated Inference via Confident Adaptive Transformers. In EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 4962–4979). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.emnlp-main.406
