Pretraining with Artificial Language: Studying Transferable Knowledge in Language Models


Abstract

We investigate what kind of structural knowledge learned in neural network encoders is transferable to processing natural language. We design artificial languages with structural properties that mimic those of natural language, pretrain encoders on the data, and evaluate how well the pretrained encoders perform on downstream tasks in natural language. Our experimental results show that pretraining with an artificial language that has a nesting dependency structure provides knowledge that transfers to natural language. A follow-up probing analysis indicates that the success of the transfer is related to the amount of contextual information the encoder captures, and that what is transferred is knowledge of the position-aware context dependence of language. Our results provide insights into how neural network encoders process human language and into the source of the cross-lingual transferability of recent multilingual language models.
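To make the notion of a "nesting dependency structure" concrete, the following is a minimal sketch of a toy generator for such an artificial language. It is not the authors' actual data-generation procedure: the Dyck-style matched-pair scheme, the vocabulary size, and the X/Y token names are illustrative assumptions only.

```python
import random

def generate_nested_sequence(vocab_size=100, max_depth=5, max_len=20, seed=None):
    """Generate a toy sentence whose tokens form nested (bracket-like) dependencies.

    Each opening token X_i is later closed by a matching token Y_i, and pairs may
    only nest, never cross -- a Dyck-style structure often used as a stand-in for
    the hierarchical dependencies of natural language.
    """
    rng = random.Random(seed)
    sequence, stack = [], []
    while len(sequence) < max_len:
        can_open = len(stack) < max_depth
        can_close = len(stack) > 0
        if can_open and (not can_close or rng.random() < 0.5):
            token_id = rng.randrange(vocab_size)
            sequence.append(f"X{token_id}")   # open a new dependency pair
            stack.append(token_id)
        else:
            sequence.append(f"Y{stack.pop()}")  # close the most recently opened pair
    # close any dependencies still open so the sequence is well formed
    while stack:
        sequence.append(f"Y{stack.pop()}")
    return sequence

if __name__ == "__main__":
    print(" ".join(generate_nested_sequence(seed=0)))
```

Sequences produced this way contain long-range, strictly nested dependencies between token pairs, which is the kind of structural property the abstract credits with enabling transfer to natural language.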

Citation (APA)

Ri, R., & Tsuruoka, Y. (2022). Pretraining with artificial language: Studying transferable knowledge in language models. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 7302–7315). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.acl-long.504
