Pretrained Transformers as Universal Computation Engines

Abstract

We investigate the capability of a transformer pretrained on natural language to generalize to other modalities with minimal finetuning; in particular, without finetuning of the self-attention and feedforward layers of the residual blocks. We consider such a model, which we call a Frozen Pretrained Transformer (FPT), and study finetuning it on a variety of sequence classification tasks spanning numerical computation, vision, and protein fold prediction. In contrast to prior works, which investigate finetuning on the same modality as the pretraining dataset, we show that pretraining on natural language can improve performance and compute efficiency on non-language downstream tasks. Additionally, we perform an analysis of the architecture, comparing the performance of a randomly initialized transformer to a randomly initialized LSTM. Combining the two insights, we find that language-pretrained transformers can obtain strong performance on a variety of non-language tasks.
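As a rough illustration of the setup the abstract describes, the sketch below freezes the self-attention and feedforward weights of a language-pretrained GPT-2 and attaches small trainable input and output projections for a downstream classification task. The choice of which remaining parameters stay trainable (layer norms here), the task dimensions, and the class name are assumptions made for illustration, not the authors' exact configuration.

```python
# Minimal FPT-style sketch (assumptions noted above): freeze self-attention and
# feedforward (MLP) weights of a pretrained GPT-2; train new input/output layers.
import torch
import torch.nn as nn
from transformers import GPT2Model


class FrozenPretrainedTransformer(nn.Module):
    def __init__(self, input_dim: int, num_classes: int):
        super().__init__()
        self.gpt2 = GPT2Model.from_pretrained("gpt2")
        hidden = self.gpt2.config.n_embd  # 768 for the base model

        # Freeze the self-attention and feedforward weights in every residual block.
        for block in self.gpt2.h:
            for p in block.attn.parameters():
                p.requires_grad = False
            for p in block.mlp.parameters():
                p.requires_grad = False
        # Layer norms and the new task-specific layers below remain trainable
        # (an assumption about the finetuned subset, guided by the abstract).

        self.input_proj = nn.Linear(input_dim, hidden)      # maps task tokens into the model
        self.output_head = nn.Linear(hidden, num_classes)   # sequence classification head

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, input_dim), already tokenized/patched for the task
        h = self.gpt2(inputs_embeds=self.input_proj(x)).last_hidden_state
        return self.output_head(h[:, -1])  # classify from the final position


# Hypothetical usage: a bit-string task with 2-dimensional tokens and 2 classes.
model = FrozenPretrainedTransformer(input_dim=2, num_classes=2)
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable:,}")
```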

Cite (APA)
Lu, K., Grover, A., Abbeel, P., & Mordatch, I. (2022). Pretrained Transformers as Universal Computation Engines. In Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022 (Vol. 36, pp. 7628–7636). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v36i7.20729
