Are Larger Pretrained Language Models Uniformly Better? Comparing Performance at the Instance Level

Citations: 22 · Mendeley readers: 85

Abstract

Larger language models have higher accuracy on average, but are they better on every single instance (datapoint)? Some work suggests larger models have higher out-of-distribution robustness, while other work suggests they have lower accuracy on rare subgroups. To understand these differences, we investigate these models at the level of individual instances. However, one major challenge is that individual predictions are highly sensitive to noise arising from the randomness in training. We develop statistically rigorous methods to address this, and after accounting for pretraining and finetuning noise, we find that our BERT-LARGE is worse than BERT-MINI on at least 1-4% of instances across MNLI, SST-2, and QQP, compared to the overall accuracy improvement of 2-10%. We also find that finetuning noise increases with model size, and that instance-level accuracy has momentum: improvement from BERT-MINI to BERT-MEDIUM correlates with improvement from BERT-MEDIUM to BERT-LARGE. Our findings suggest that instance-level predictions provide a rich source of information; we therefore recommend that researchers supplement model weights with model predictions.
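
To illustrate the kind of instance-level comparison described above, the sketch below (in Python) averages per-instance correctness over several finetuning seeds for a small and a large model and counts the instances where the smaller model appears better. This is a minimal, hypothetical illustration, not the authors' method: the correctness matrices, seed count, and accuracy levels are made up, and the paper's statistically rigorous analysis goes further by correcting for the residual noise that a naive seed average leaves behind.

    # Hypothetical sketch: per-instance accuracy estimates from multiple finetuning seeds.
    import numpy as np

    rng = np.random.default_rng(0)
    n_seeds, n_instances = 10, 1000

    # Stand-in correctness matrices (rows = finetuning seeds, cols = instances);
    # in practice these would come from evaluating each finetuned checkpoint.
    correct_mini = rng.random((n_seeds, n_instances)) < 0.75   # "BERT-MINI"
    correct_large = rng.random((n_seeds, n_instances)) < 0.85  # "BERT-LARGE"

    # Average over seeds to smooth out finetuning noise for each instance.
    acc_mini = correct_mini.mean(axis=0)
    acc_large = correct_large.mean(axis=0)

    # Fraction of instances where the smaller model's estimated accuracy
    # exceeds the larger model's.
    worse_fraction = (acc_mini > acc_large).mean()
    print(f"Mean accuracy: mini={acc_mini.mean():.3f}, large={acc_large.mean():.3f}")
    print(f"Instances where the large model looks worse: {worse_fraction:.1%}")

Because seed-averaged estimates are themselves noisy, a naive count like this one overestimates the true fraction of instances on which the larger model is worse; the paper's contribution is a statistically rigorous lower bound on that fraction (the 1-4% figure quoted above).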

Citation (APA)

Zhong, R., Ghosh, D., Klein, D., & Steinhardt, J. (2021). Are Larger Pretrained Language Models Uniformly Better? Comparing Performance at the Instance Level. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 3813–3827). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-acl.334

Readers over time: chart of Mendeley reader counts per year, 2021–2025.

Readers' Seniority

PhD / Post grad / Masters / Doc: 27 (73%)
Researcher: 7 (19%)
Lecturer / Post doc: 2 (5%)
Professor / Associate Prof.: 1 (3%)

Readers' Discipline

Computer Science: 35 (81%)
Linguistics: 4 (9%)
Engineering: 3 (7%)
Neuroscience: 1 (2%)
