Large multilingual language models typically share their parameters across all languages, which enables cross-lingual task transfer, but learning can also be hindered when training updates from different languages are in conflict. In this article, we propose novel methods for using language-specific subnetworks, which control cross-lingual parameter sharing, to reduce conflicts and increase positive transfer during fine-tuning. We introduce dynamic subnetworks, which are jointly updated with the model, and we combine our methods with meta-learning, an established, but complementary, technique for improving cross-lingual transfer. Finally, we provide extensive analyses of how each of our methods affects the models.
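To make the core idea concrete, the sketch below shows one common way to realize language-specific subnetworks: a per-language binary mask gating which weights of a shared layer each language uses and updates during fine-tuning. This is a minimal illustration under assumed details (the MaskedLinear name, the random mask initialization, and the keep_ratio parameter are hypothetical); it is not the paper's actual procedure, which also involves dynamic, jointly learned masks and meta-learning.

# Illustrative sketch only, not the authors' implementation.
# Each language gets a fixed binary mask over a shared weight matrix, so
# masked-out weights neither contribute to that language's forward pass
# nor receive gradients from it, limiting conflicting updates.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskedLinear(nn.Module):
    def __init__(self, in_features, out_features, languages, keep_ratio=0.7):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # Hypothetical initialization: keep each weight for a language with prob. keep_ratio.
        self.masks = {
            lang: (torch.rand_like(self.linear.weight) < keep_ratio).float()
            for lang in languages
        }

    def forward(self, x, lang):
        mask = self.masks[lang]
        # Masking the weights zeroes both their contribution and their gradient
        # for this language's examples.
        return F.linear(x, self.linear.weight * mask, self.linear.bias)


layer = MaskedLinear(16, 8, languages=["en", "fi", "ta"])
out = layer(torch.randn(4, 16), lang="fi")
print(out.shape)  # torch.Size([4, 8])

A dynamic variant, as described in the abstract, would additionally treat the masks as learnable and update them jointly with the shared parameters rather than fixing them in advance.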
Choenni, R., Garrette, D., & Shutova, E. (2023). Cross-Lingual Transfer with Language-Specific Subnetworks for Low-Resource Dependency Parsing. Computational Linguistics, 49(3), 613–641. https://doi.org/10.1162/coli_a_00482