Cross-Lingual Transfer with Language-Specific Subnetworks for Low-Resource Dependency Parsing

Abstract

Large multilingual language models typically share their parameters across all languages, which enables cross-lingual task transfer, but learning can also be hindered when training updates from different languages are in conflict. In this article, we propose novel methods for using language-specific subnetworks, which control cross-lingual parameter sharing, to reduce conflicts and increase positive transfer during fine-tuning. We introduce dynamic subnetworks, which are jointly updated with the model, and we combine our methods with meta-learning, an established, but complementary, technique for improving cross-lingual transfer. Finally, we provide extensive analyses of how each of our methods affects the models.
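To make the subnetwork idea in the abstract concrete, the short PyTorch sketch below is a hypothetical illustration, not the authors' implementation: it gates a linear layer with a fixed random binary mask per language, so each language's fine-tuning updates reach only the weights in its own subnetwork. The class name, mask construction, and toy training loop are all assumptions for illustration; the paper's dynamic subnetworks would instead update the masks jointly with the model.

    import torch
    import torch.nn as nn

    class MaskedLinear(nn.Module):
        """Linear layer whose weights are gated by a per-language binary mask.

        Hypothetical sketch: fixed random masks stand in for masks that would
        be learned (e.g., by pruning or jointly with the model).
        """
        def __init__(self, in_dim, out_dim, languages):
            super().__init__()
            self.linear = nn.Linear(in_dim, out_dim)
            # One binary mask per language over the weight matrix.
            self.masks = {
                lang: (torch.rand(out_dim, in_dim) > 0.5).float()
                for lang in languages
            }

        def forward(self, x, lang):
            mask = self.masks[lang]
            # Only this language's subnetwork contributes to the forward pass,
            # so gradients flow only to the unmasked weights: updates from
            # different languages touch overlapping but distinct parameter sets.
            return nn.functional.linear(x, self.linear.weight * mask, self.linear.bias)

    # Toy fine-tuning loop over three languages with random data.
    model = MaskedLinear(16, 4, languages=["en", "fi", "ta"])
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for lang in ["en", "fi", "ta"]:
        x, y = torch.randn(8, 16), torch.randint(0, 4, (8,))
        loss = nn.functional.cross_entropy(model(x, lang), y)
        opt.zero_grad()
        loss.backward()
        opt.step()

Because masked-out weights receive zero gradient, conflicting updates from different languages are confined to their respective subnetworks, while weights shared by several masks still enable positive transfer.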

Cite

APA

Choenni, R., Garrette, D., & Shutova, E. (2023). Cross-Lingual Transfer with Language-Specific Subnetworks for Low-Resource Dependency Parsing. Computational Linguistics, 49(3), 613–641. https://doi.org/10.1162/coli_a_00482
