The present work investigates the relational structures learnt by mBERT, a multilingual transformer-based network, with respect to different cross-linguistic regularities proposed in the fields of theoretical and quantitative linguistics. We pursued this objective through a zero-shot transfer experiment, evaluating the model's ability to generalize its native task (masked language modelling) to artificial languages that either respect or violate a proposed language universal, and comparing its performance to that of BERT, a monolingual model with an identical configuration. We created four artificial corpora with a Probabilistic Context-Free Grammar (PCFG), manipulating the frequency distribution of tokens and the structure of their dependency relations. We found that while both models benefited from a Zipfian distribution of the tokens and from the presence of head-dependent structures, the multilingual transformer network relied more strongly on hierarchical cues than its monolingual counterpart.
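To make the corpus-generation setup concrete, the sketch below shows one possible way to sample sentences from a PCFG while drawing word forms from a Zipfian (1/rank) frequency distribution. This is not the authors' released pipeline: the toy grammar, its rule probabilities, the 50-word vocabulary, and the category placeholders are all illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the authors' code): sample sentences
# from a toy PCFG, then lexicalize category placeholders with word forms
# drawn from a Zipfian (1/rank) distribution.
import random
from nltk import PCFG, Nonterminal

# Hypothetical head-initial toy grammar; rule probabilities are illustrative.
grammar = PCFG.fromstring("""
S  -> NP VP [1.0]
NP -> N [0.7] | N PP [0.3]
VP -> V NP [0.6] | V [0.4]
PP -> P NP [1.0]
N  -> 'n' [1.0]
V  -> 'v' [1.0]
P  -> 'p' [1.0]
""")

def sample(symbol):
    """Recursively expand a symbol, choosing productions by their probability."""
    if not isinstance(symbol, Nonterminal):
        return [symbol]  # terminal placeholder: 'n', 'v', or 'p'
    prods = grammar.productions(lhs=symbol)
    prod = random.choices(prods, weights=[p.prob() for p in prods])[0]
    return [tok for sym in prod.rhs() for tok in sample(sym)]

# Zipfian lexicon: the word of rank r is sampled with weight 1/r.
VOCAB = {cat: [f"{cat}{i}" for i in range(1, 51)] for cat in "nvp"}
WEIGHTS = [1.0 / r for r in range(1, 51)]

def lexicalize(placeholders):
    """Replace each category placeholder with a Zipf-sampled word form."""
    return [random.choices(VOCAB[c], weights=WEIGHTS)[0] for c in placeholders]

corpus = [" ".join(lexicalize(sample(grammar.start()))) for _ in range(5)]
print("\n".join(corpus))
```

Varying the rule probabilities (e.g. swapping head-initial for head-final expansions) or flattening WEIGHTS to a uniform distribution would yield corpora that violate the corresponding regularity, which is the kind of manipulation the abstract describes.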
de Varda, A. G., & Zamparelli, R. (2022). Multilingualism Encourages Recursion: a Transfer Study with mBERT. In SIGTYP 2022 - 4th Workshop on Computational Typology and Multilingual NLP, Proceedings of the Workshop (pp. 1–10). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.sigtyp-1.1