Specializing Multilingual Language Models: An Empirical Study


Abstract

Pretrained multilingual language models have become a common tool in transferring NLP capabilities to low-resource languages, often with adaptations. In this work, we study the performance, extensibility, and interaction of two such adaptations: vocabulary augmentation and script transliteration. Our evaluations on part-of-speech tagging, universal dependency parsing, and named entity recognition in nine diverse low-resource languages uphold the viability of these approaches while raising new questions around how to optimally adapt multilingual models to low-resource settings.
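The abstract names its two adaptation techniques without detail. As a rough illustration only, the sketch below shows how both are commonly implemented with the Hugging Face transformers API; the model name, label count, added tokens, and the tiny Cyrillic-to-Latin table are illustrative assumptions, not the paper's actual setup.

from transformers import AutoTokenizer, AutoModelForTokenClassification

# --- Vocabulary augmentation (sketch) ---
# Extend the pretrained tokenizer with tokens mined from target-language
# text, then resize the embedding matrix so the model can use them.
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-multilingual-cased",
    num_labels=17,  # e.g., the 17-tag UPOS set for POS tagging
)
new_tokens = ["toki_1", "toki_2"]  # hypothetical tokens from a target-language corpus
tokenizer.add_tokens(new_tokens)
model.resize_token_embeddings(len(tokenizer))  # new rows start randomly initialized

# --- Script transliteration (sketch) ---
# Rewrite target-language text into a script the pretrained model has seen
# more of, so its existing subwords apply. This mapping is a tiny
# illustrative fragment, not the paper's transliteration scheme.
CYRILLIC_TO_LATIN = {"а": "a", "б": "b", "в": "v", "г": "g", "д": "d"}

def transliterate(text: str) -> str:
    return "".join(CYRILLIC_TO_LATIN.get(ch, ch) for ch in text)

inputs = tokenizer(transliterate("абвгд"), return_tensors="pt")
outputs = model(**inputs)  # then fine-tune on POS / parsing / NER data as usual

Resizing after add_tokens leaves the pretrained embeddings intact and only appends fresh rows for the new vocabulary, which is one reason the two adaptations can be combined.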

Citation (APA)

Chau, E. C., & Smith, N. A. (2021). Specializing Multilingual Language Models: An Empirical Study. In Proceedings of the 1st Workshop on Multilingual Representation Learning (MRL 2021) (pp. 51–61). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.mrl-1.5
