On the Transferability of Pre-trained Language Models: A Study from Artificial Datasets

Abstract

Pre-training language models (LMs) on large-scale unlabeled text data allows them to achieve exceptional downstream performance far more easily than counterparts trained directly on the downstream tasks. In this work, we study what specific traits in the pre-training data, other than the semantics, make a pre-trained LM superior to its counterparts trained from scratch on downstream tasks. We propose using artificially constructed datasets as the pre-training data to exclude the effect of semantics and to further control the characteristics of the pre-training corpora. By fine-tuning the pre-trained models on the GLUE benchmark, we measure how beneficial it is to transfer knowledge from a model trained on a dataset possessing a specific trait. We define and discuss three different characteristics of the artificial datasets: 1) matching the token uni-gram or bi-gram distribution between pre-training and downstream fine-tuning, 2) the presence of explicit dependencies among the tokens in a sequence, and 3) the length of the implicit dependencies among the tokens in a sequence. Our experiments show that the explicit dependencies in the sequences of the pre-training data are critical to downstream performance. Our results also reveal that models achieve better downstream performance when pre-trained on a dataset with a longer range of implicit dependencies. Based on our analysis, we find that models pre-trained on artificial datasets are prone to learning spurious correlations in downstream tasks. Our work reveals that even when LMs are not pre-trained on natural language, they still gain transferability to certain human-language downstream tasks once they learn to model token dependencies in the sequences. This result helps us understand the exceptional transferability of pre-trained LMs.
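The paper's exact recipe for building the artificial corpora is not reproduced here. The sketch below is only a minimal illustration of two of the traits described in the abstract: sampling a synthetic corpus that matches a natural corpus's uni-gram distribution, and injecting a simple form of explicit token dependency. The function names and the fixed-offset copying rule are assumptions made for illustration, not the authors' construction.

```python
import random
from collections import Counter


def unigram_distribution(corpus_token_ids):
    """Estimate the uni-gram distribution of a tokenized corpus (list of token-id lists)."""
    counts = Counter(tok for sent in corpus_token_ids for tok in sent)
    total = sum(counts.values())
    tokens = list(counts)
    probs = [counts[t] / total for t in tokens]
    return tokens, probs


def sample_unigram_sequence(tokens, probs, length):
    """Trait 1 (illustrative): a synthetic sequence whose tokens follow the natural
    uni-gram distribution but carry no semantics or word order."""
    return random.choices(tokens, weights=probs, k=length)


def add_explicit_dependencies(sequence, pair_offset=4):
    """Trait 2 (illustrative): create explicit dependencies by copying each anchor token
    to a position a fixed offset away, so position i determines position i + pair_offset."""
    seq = list(sequence)
    for i in range(0, len(seq) - pair_offset, pair_offset * 2):
        seq[i + pair_offset] = seq[i]
    return seq


if __name__ == "__main__":
    # Toy "natural" corpus of token ids standing in for a real tokenized corpus.
    natural = [[5, 7, 7, 2, 9], [5, 2, 2, 8, 7, 7], [9, 5, 7]]
    tokens, probs = unigram_distribution(natural)
    synthetic = sample_unigram_sequence(tokens, probs, length=32)
    print("uni-gram matched:", synthetic)
    print("with explicit dependencies:", add_explicit_dependencies(synthetic))
```

In the paper's setup, corpora like these would replace natural text for masked-LM pre-training before fine-tuning on GLUE; the point of the sketch is only to make the controlled traits concrete.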

Citation (APA)

Chiang, C. H., & Lee, H. Y. (2022). On the Transferability of Pre-trained Language Models: A Study from Artificial Datasets. In Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022 (Vol. 36, pp. 10518–10525). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v36i10.21295
