PreCog: Exploring the Relation between Memorization and Performance in Pre-trained Language Models

Citations: 6 · Mendeley readers: 13

Abstract

Large Language Models (LLMs) are impressive machines with the ability to memorize, and possibly generalize, learning examples. We present here a small, focused contribution to the analysis of the interplay between memorization and performance of BERT in downstream tasks. We propose PreCog, a measure for evaluating memorization from pre-training, and we analyze its correlation with BERT's performance. Our experiments show that highly memorized examples are better classified, suggesting memorization is an essential key to success for BERT.
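The abstract describes a correlation analysis between a per-example memorization score and downstream classification performance. The sketch below is purely illustrative and is not the authors' PreCog implementation: the memorization scores and correctness labels are synthetic placeholders, used only to show one way such a correlation and a high- vs. low-memorization accuracy comparison could be computed.

```python
# Illustrative sketch only: NOT the PreCog measure from the paper.
# Assumes you already have, for each evaluation example, (a) some
# memorization score derived from pre-training coverage and (b) whether
# BERT classified the example correctly. Both arrays here are synthetic.

import numpy as np
from scipy.stats import pointbiserialr

rng = np.random.default_rng(0)

# Hypothetical per-example memorization scores in [0, 1].
memorization = rng.uniform(0.0, 1.0, size=1000)

# Hypothetical correctness labels (1 = classified correctly), loosely
# tied to memorization so the example shows a visible trend.
correct = (rng.uniform(size=1000) < 0.6 + 0.3 * memorization).astype(int)

# Point-biserial correlation between a binary outcome and a continuous score.
r, p = pointbiserialr(correct, memorization)
print(f"correlation r={r:.3f}, p={p:.3g}")

# Compare accuracy on highly vs. weakly memorized examples (median split).
threshold = np.median(memorization)
high_acc = correct[memorization >= threshold].mean()
low_acc = correct[memorization < threshold].mean()
print(f"accuracy: high-memorization={high_acc:.3f}, low-memorization={low_acc:.3f}")
```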

Citation (APA)

Ranaldi, L., Ruzzetti, E. S., & Zanzotto, F. M. (2023). PreCog: Exploring the Relation between Memorization and Performance in Pre-trained Language Models. In International Conference Recent Advances in Natural Language Processing, RANLP (pp. 961–967). Incoma Ltd. https://doi.org/10.26615/978-954-452-092-2_103
