Evidence > Intuition: Transferability Estimation for Encoder Selection

Citations: 6 · Mendeley readers: 20

Abstract

With the increase in availability of large pre-trained language models (LMs) in Natural Language Processing (NLP), it becomes critical to assess their fit for a specific target task a priori, as fine-tuning the entire space of available LMs is computationally prohibitive and unsustainable. However, encoder transferability estimation has received little to no attention in NLP. In this paper, we propose to generate quantitative evidence to predict which LM, out of a pool of models, will perform best on a target task without having to fine-tune all candidates. We provide a comprehensive study on LM ranking for 10 NLP tasks spanning the two fundamental problem types of classification and structured prediction. We adopt the state-of-the-art Logarithm of Maximum Evidence (LogME) measure from Computer Vision (CV) and find that it positively correlates with final LM performance in 94% of the setups. In the first study of its kind, we further compare transferability measures with the de facto standard of human practitioner ranking, finding that evidence from quantitative metrics is more robust than pure intuition and can help identify unexpected LM candidates.
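
The abstract only names the LogME measure; as a rough illustration of what such a score computes, below is a minimal sketch of Logarithm of Maximum Evidence estimation over frozen encoder features for a classification task, via Bayesian evidence maximization with fixed-point updates. The function name logme, its hyperparameters, and the one-hot handling of labels are assumptions for illustration, not the authors' released code.

```python
import numpy as np


def logme(features: np.ndarray, labels: np.ndarray,
          max_iter: int = 100, tol: float = 1e-6) -> float:
    """Average log maximum evidence per sample for frozen features.

    features: (n, d) array of encoder representations (no fine-tuning).
    labels:   (n,) integer class labels, converted to one-hot targets.
    """
    n, d = features.shape
    # Thin SVD of the feature matrix; sigma are the eigenvalues of F^T F.
    u, s, _ = np.linalg.svd(features, full_matrices=False)
    sigma = s ** 2

    targets = np.eye(labels.max() + 1)[labels]           # one-hot, shape (n, K)
    evidences = []
    for k in range(targets.shape[1]):
        y = targets[:, k]
        z2 = (u.T @ y) ** 2                              # squared projections onto span(F)
        res_const = y @ y - z2.sum()                     # residual outside span(F)
        alpha, beta = 1.0, 1.0
        for _ in range(max_iter):
            # Evidence-maximization (MacKay-style) fixed-point updates.
            gamma = (beta * sigma / (alpha + beta * sigma)).sum()
            m2 = (beta ** 2 * sigma * z2 / (alpha + beta * sigma) ** 2).sum()
            res = (alpha ** 2 * z2 / (alpha + beta * sigma) ** 2).sum() + res_const
            alpha_new = gamma / (m2 + 1e-12)
            beta_new = (n - gamma) / (res + 1e-12)
            converged = (abs(alpha_new - alpha) / alpha < tol
                         and abs(beta_new - beta) / beta < tol)
            alpha, beta = alpha_new, beta_new
            if converged:
                break
        # Recompute posterior quantities at the converged (alpha, beta).
        m2 = (beta ** 2 * sigma * z2 / (alpha + beta * sigma) ** 2).sum()
        res = (alpha ** 2 * z2 / (alpha + beta * sigma) ** 2).sum() + res_const
        # Log marginal likelihood of the Bayesian linear model at (alpha, beta).
        logdet = np.log(alpha + beta * sigma).sum() + (d - len(sigma)) * np.log(alpha)
        evidence = 0.5 * (n * np.log(beta) + d * np.log(alpha) - logdet
                          - beta * res - alpha * m2 - n * np.log(2 * np.pi))
        evidences.append(evidence / n)
    return float(np.mean(evidences))
```

To rank a pool of candidate encoders under this scheme, one would extract frozen features for each candidate on the target task's training set (e.g. mean-pooled or [CLS] representations) and prefer the model with the highest score; the abstract reports that such a ranking positively correlates with post-fine-tuning performance in 94% of the studied setups.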




Citation (APA)

Bassignana, E., Müller-Eberstein, M., Zhang, M., & Plank, B. (2022). Evidence > Intuition: Transferability Estimation for Encoder Selection. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 (pp. 4218–4227). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.emnlp-main.283

Readers' Seniority

PhD / Post grad / Masters / Doc: 4 (57%)
Researcher: 2 (29%)
Lecturer / Post doc: 1 (14%)

Readers' Discipline

Computer Science: 8 (73%)
Medicine and Dentistry: 1 (9%)
Linguistics: 1 (9%)
Neuroscience: 1 (9%)
