General purpose text embeddings from pre-trained language models for scalable inference

Citations: 8
Readers (Mendeley): 85

Abstract

The state of the art on many NLP tasks is currently achieved by large pre-trained language models, which require a considerable amount of computation. We aim to reduce the inference cost in a setting where many different predictions are made on a single piece of text. In that case, computational cost during inference can be amortized over the different predictions (tasks) using a shared text encoder. We compare approaches for training such an encoder and show that encoders pre-trained over multiple tasks generalize well to unseen tasks. We also compare ways of extracting fixed- and limited-size representations from this encoder, including pooling features extracted from multiple layers or positions. Our best approach compares favorably to knowledge distillation, achieving higher accuracy and lower computational cost once the system is handling around 7 tasks. Further, we show that through binary quantization, we can reduce the size of the extracted representations by a factor of 16 to store them for later use. The resulting method offers a compelling solution for using large-scale pre-trained models at a fraction of the computational cost when multiple tasks are performed on the same text.
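As a rough illustration of the amortization idea described above (not the authors' exact architecture), the sketch below runs one shared pre-trained encoder once per text and feeds the same pooled, fixed-size vector to several lightweight task heads. The backbone name, the mean-pooling over the last two layers, and the head sizes are illustrative assumptions, though pooling over multiple layers and positions is one of the variants the paper compares.

# Amortized inference sketch: one shared encoder, many cheap task heads.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")  # assumed backbone
encoder = AutoModel.from_pretrained("roberta-base", output_hidden_states=True)
encoder.eval()

def encode(text: str) -> torch.Tensor:
    """Run the shared encoder once; mean-pool the last two layers over all
    positions into a single fixed-size vector (one possible pooling scheme)."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = encoder(**inputs).hidden_states   # tuple of per-layer tensors
    last_two = torch.stack(hidden[-2:])            # (2, 1, seq_len, dim)
    return last_two.mean(dim=(0, 2)).squeeze(0)    # (dim,)

# Lightweight per-task heads: the encoder cost is paid once and shared,
# so each extra task adds only a small linear layer at inference time.
heads = nn.ModuleDict({
    "sentiment": nn.Linear(encoder.config.hidden_size, 2),
    "topic": nn.Linear(encoder.config.hidden_size, 20),
})

embedding = encode("Large pre-trained models can be expensive at inference.")
predictions = {task: head(embedding) for task, head in heads.items()}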
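The binary quantization step can likewise be sketched. Storing one sign bit per dimension instead of a 16-bit float gives the factor-of-16 storage reduction cited in the abstract; the dimension and the sign-threshold scheme below are assumptions for the example, not the paper's exact procedure.

# Binary quantization sketch: pack sign bits to shrink stored embeddings.
import numpy as np

def binarize(embedding: np.ndarray) -> np.ndarray:
    """Quantize a float vector to packed sign bits (1 bit per dimension)."""
    bits = (embedding > 0).astype(np.uint8)
    return np.packbits(bits)  # 8 dimensions per stored byte

def unbinarize(packed: np.ndarray, dim: int) -> np.ndarray:
    """Recover a +/-1 vector for downstream similarity or classification."""
    bits = np.unpackbits(packed)[:dim].astype(np.float32)
    return bits * 2.0 - 1.0

dim = 768                                         # assumed embedding size
emb = np.random.randn(dim).astype(np.float16)     # 768 * 2 bytes = 1536 B
packed = binarize(emb.astype(np.float32))         # 768 / 8 bits  =   96 B
print(emb.nbytes / packed.nbytes)                 # -> 16.0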

Citation (APA)

Du, J., Ott, M., Li, H., Zhou, X., & Stoyanov, V. (2020). General purpose text embeddings from pre-trained language models for scalable inference. In Findings of the Association for Computational Linguistics: EMNLP 2020 (pp. 3018–3030). Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.findings-emnlp.271
