Pretrained multilingual language models have become a common tool in transferring NLP capabilities to low-resource languages, often with adaptations. In this work, we study the performance, extensibility, and interaction of two such adaptations: vocabulary augmentation and script transliteration. Our evaluations on part-of-speech tagging, universal dependency parsing, and named entity recognition in nine diverse low-resource languages uphold the viability of these approaches while raising new questions around how to optimally adapt multilingual models to low-resource settings.
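The abstract names vocabulary augmentation as one of the two adaptations studied. As a rough illustration of what such an adaptation typically involves in practice (a sketch under assumed settings, not the authors' exact procedure), the snippet below adds hypothetical target-language word pieces to a pretrained multilingual tokenizer and resizes the model's embedding matrix, using the Hugging Face transformers API; the model name, label count, and token list are placeholders.

```python
# Minimal sketch of vocabulary augmentation for a pretrained multilingual model.
# Illustration only: the model name, new-token list, and label count below are
# placeholder assumptions, not the paper's exact setup.
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(
    model_name, num_labels=17  # e.g. the UPOS tag set for POS tagging
)

# Hypothetical target-language word pieces mined from monolingual text.
new_tokens = ["ŋombe", "##tsʼo", "##ɣa"]
num_added = tokenizer.add_tokens(new_tokens)

# New embedding rows are randomly initialized here and would be learned during
# continued pretraining or task fine-tuning on the target language.
model.resize_token_embeddings(len(tokenizer))
print(f"Added {num_added} tokens; vocabulary size is now {len(tokenizer)}")
```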
Citation: Chau, E. C., & Smith, N. A. (2021). Specializing Multilingual Language Models: An Empirical Study. In Proceedings of the 1st Workshop on Multilingual Representation Learning (MRL 2021) (pp. 51–61). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.mrl-1.5