Meta-Learning Fast Weight Language Models


Abstract

Dynamic evaluation of language models (LMs) adapts model parameters at test time using gradient information from previous tokens and substantially improves LM performance. However, it requires over 3x more compute than standard inference. We present Fast Weight Layers (FWLs), a neural component that provides the benefits of dynamic evaluation much more efficiently by expressing gradient updates as linear attention. A key improvement over dynamic evaluation is that FWLs can also be applied at training time so the model learns to make good use of gradient updates. FWLs can easily be added on top of existing transformer models, require relatively little extra compute or memory to run, and significantly improve language modeling perplexity.
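To make the core idea concrete, below is a minimal sketch of the general fast-weight / linear-attention mechanism the abstract alludes to: per token, a "fast" weight matrix is updated with an outer product of key and value vectors, and queries read from the accumulated matrix, with the whole layer sitting on top of a base model's hidden states. This is an illustrative assumption about the mechanism, not the authors' exact FWL formulation; the module name, dimensions, and the hypothetical `base_transformer` / `lm_head` in the usage comment are all placeholders.

```python
import torch
import torch.nn as nn

class FastWeightLayerSketch(nn.Module):
    """Illustrative fast-weight layer via linear attention (not the paper's code)."""

    def __init__(self, d_model: int, d_head: int):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_head)
        self.k_proj = nn.Linear(d_model, d_head)
        self.v_proj = nn.Linear(d_model, d_head)
        self.out_proj = nn.Linear(d_head, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) hidden states from an existing transformer
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        batch, seq_len, d_head = q.shape
        # Fast weights start at zero and accumulate one outer product per token.
        fast_w = torch.zeros(batch, d_head, d_head, device=x.device)
        outputs = []
        for t in range(seq_len):
            # Rank-1 outer-product update, playing the role of a gradient-like step.
            fast_w = fast_w + torch.einsum('bi,bj->bij', v[:, t], k[:, t])
            # The query reads from the accumulated fast weights (linear attention).
            outputs.append(torch.einsum('bij,bj->bi', fast_w, q[:, t]))
        y = torch.stack(outputs, dim=1)
        # Residual connection so the layer adds on top of the base model.
        return x + self.out_proj(y)

# Hypothetical usage on top of an existing transformer:
# h = base_transformer(tokens)                      # (batch, seq_len, d_model)
# logits = lm_head(FastWeightLayerSketch(512, 64)(h))
```

Because the per-token update is just a sum of outer products, it can be trained end to end with ordinary backpropagation, which is what lets this style of layer be applied at training time rather than only at test time.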

Cite (APA)

Clark, K., Guu, K., Chang, M. W., Pasupat, P., Hinton, G., & Norouzi, M. (2022). Meta-Learning Fast Weight Language Models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 (pp. 9751–9757). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.emnlp-main.661
