Is neural language acquisition similar to natural? A chronological probing study


Abstract

The probing methodology allows one to obtain a partial representation of the linguistic phenomena stored in the inner layers of a neural network, using external classifiers and statistical analysis. Pre-trained transformer-based language models are widely used for both natural language understanding (NLU) and natural language generation (NLG) tasks, which makes them the most common choice for downstream applications. However, little analysis has been done on whether these models are pre-trained sufficiently or contain knowledge that correlates with linguistic theory. We present a chronological probing study of transformer English models, MultiBERT and T5. We sequentially compare the information about language that the models learn over the course of training on their corpora. The results show that (1) linguistic information is acquired in the early stages of training, and (2) both language models demonstrate the capability to capture features from various levels of language, including morphology, syntax, and even discourse, while they can also inconsistently fail on tasks that are perceived as easy. We also introduce an open-source framework for chronological probing research, compatible with other transformer-based models: https://github.com/EkaterinaVoloshina/chronological_probing.
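For readers unfamiliar with the probing setup, the sketch below illustrates the general idea: hidden states are extracted from a frozen pre-trained model, layer by layer, and an external linear classifier is trained on them; the probe's accuracy is read as evidence of how accessibly that layer encodes a linguistic phenomenon. This is a minimal, generic illustration, not the authors' framework (see the repository above); the model name, the toy acceptability data, and the layer_features helper are all illustrative assumptions.

```python
# Minimal sketch of layer-wise probing with an external classifier.
# Not the authors' code; see the linked repository for their framework.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()  # the language model stays frozen; only the probe is trained

def layer_features(sentences, layer):
    """Mean-pooled hidden states of one layer for each sentence."""
    feats = []
    with torch.no_grad():
        for s in sentences:
            inputs = tokenizer(s, return_tensors="pt", truncation=True)
            hidden = model(**inputs).hidden_states[layer]  # (1, seq_len, dim)
            feats.append(hidden.mean(dim=1).squeeze(0).numpy())
    return np.stack(feats)

# Hypothetical toy data; a real study would use a labeled probing corpus
# (e.g., grammatical vs. scrambled sentences for an acceptability task).
sentences = ["The cat sits on the mat.", "Cat the mat on sits the."] * 20
labels = np.array([1, 0] * 20)

# Probe every layer (index 0 is the embedding layer) with a linear classifier.
for layer in range(model.config.num_hidden_layers + 1):
    X = layer_features(sentences, layer)
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5).mean()
    print(f"layer {layer:2d}: probe accuracy {acc:.2f}")
```

A chronological study repeats this procedure across successive training checkpoints of the same model, so that probe accuracy can be tracked over training time rather than only over layers.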

Citation (APA)

Voloshina, E., Serikov, O., & Shavrina, T. (2022). Is neural language acquisition similar to natural? A chronological probing study. In Komp’juternaja Lingvistika i Intellektual’nye Tehnologii (Vol. 2022, pp. 550–563). ABBYY PRODUCTION LLC. https://doi.org/10.28995/2075-7182-2022-21-550-563
