Data Similarity is Not Enough to Explain Language Model Performance


Abstract

Large language models achieve high performance on many but not all downstream tasks. The interaction between pretraining data and task data is commonly assumed to determine this variance: a task with data that is more similar to a model's pretraining data is assumed to be easier for that model. We test whether distributional and example-specific similarity measures (embedding-, token-, and model-based) correlate with language model performance through a large-scale comparison of the Pile and C4 pretraining datasets with downstream benchmarks. Similarity correlates with performance for multilingual datasets, but in other benchmarks, we surprisingly find that similarity metrics are not correlated with accuracy or even with each other. This suggests that the relationship between pretraining data and downstream tasks is more complex than often assumed.
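
As a rough illustration of the kind of comparison the abstract describes, the sketch below scores each benchmark by an embedding-based similarity to a sample of the pretraining corpus and then checks whether that similarity correlates with downstream accuracy. This is a hypothetical Python sketch, not the paper's actual pipeline: the embed function, the datasets, and the choice of cosine similarity over mean embeddings are placeholder assumptions.

# Hypothetical sketch: score each benchmark by embedding similarity to the
# pretraining corpus, then test whether similarity tracks accuracy.
import numpy as np
from scipy.stats import spearmanr

def mean_embedding(texts, embed):
    # Average the embeddings of a sample of documents.
    return np.mean([embed(t) for t in texts], axis=0)

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def similarity_vs_performance(pretrain_sample, benchmarks, accuracies, embed):
    # pretrain_sample: list of documents sampled from the pretraining corpus
    # benchmarks:      dict of task name -> list of task example texts
    # accuracies:      dict of task name -> model accuracy on that task
    # embed:           any text -> vector function (e.g., a sentence encoder)
    corpus_vec = mean_embedding(pretrain_sample, embed)
    names = sorted(benchmarks)
    sims = [cosine_similarity(corpus_vec, mean_embedding(benchmarks[n], embed))
            for n in names]
    accs = [accuracies[n] for n in names]
    rho, p_value = spearmanr(sims, accs)
    return rho, p_value

A positive, significant rank correlation would support the "more similar means easier" assumption; the paper's finding is that, outside multilingual settings, such correlations largely fail to appear.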

Citation (APA)

Yauney, G., Reif, E., & Mimno, D. (2023). Data Similarity is Not Enough to Explain Language Model Performance. In EMNLP 2023 - 2023 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 11295–11304). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.emnlp-main.695
