Zero-Shot Dependency Parsing with Worst-Case Aware Automated Curriculum Learning


Abstract

Large multilingual pretrained language models such as mBERT and XLM-RoBERTa have been found to be surprisingly effective for cross-lingual transfer of syntactic parsing models (Wu and Dredze, 2019), but only between related languages. However, when parsing truly low-resource languages, the source (training) languages are rarely related to the target. To close this gap, we adopt a method from multi-task learning that relies on automated curriculum learning to dynamically optimize for parsing performance on outlier languages. We show that this approach is significantly better than uniform and size-proportional sampling in the zero-shot setting.
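To make the sampling idea concrete, below is a minimal sketch of worst-case aware curriculum sampling, not the authors' implementation: training languages whose recent loss is highest are drawn more often, instead of uniformly or in proportion to corpus size. The class name, temperature, and momentum values are illustrative assumptions, not values from the paper.

    import math
    import random

    class WorstCaseAwareSampler:
        """Hypothetical sketch: sample languages proportionally to their smoothed loss."""

        def __init__(self, languages, temperature=1.0, momentum=0.9):
            self.languages = list(languages)
            self.temperature = temperature      # assumed hyperparameter, not from the paper
            self.momentum = momentum            # assumed smoothing factor, not from the paper
            # Running estimate of per-language loss; equal at the start, so the
            # first draws are effectively uniform.
            self.loss = {lang: 1.0 for lang in self.languages}

        def sample(self):
            # Softmax over smoothed losses: higher loss -> higher sampling weight,
            # so the currently worst-parsed (outlier) languages are prioritized.
            scores = [math.exp(self.loss[l] / self.temperature) for l in self.languages]
            total = sum(scores)
            return random.choices(self.languages, weights=[s / total for s in scores], k=1)[0]

        def update(self, lang, observed_loss):
            # Exponential moving average of the loss observed on a batch of `lang`.
            self.loss[lang] = self.momentum * self.loss[lang] + (1 - self.momentum) * observed_loss

    # Usage: interleave sampling and loss feedback inside the training loop.
    sampler = WorstCaseAwareSampler(["ar", "eu", "fi", "he", "ko"])
    for step in range(3):
        lang = sampler.sample()
        batch_loss = random.uniform(0.5, 2.0)  # stand-in for the parser's loss on that batch
        sampler.update(lang, batch_loss)
        print(step, lang, round(batch_loss, 3))

Uniform sampling corresponds to ignoring the loss estimates, and size-proportional sampling to weighting by treebank size; the sketch replaces both with a loss-driven weighting as one way to realize a worst-case aware curriculum.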

Cite (APA)

de Lhoneux, M., Zhang, S., & Søgaard, A. (2022). Zero-Shot Dependency Parsing with Worst-Case Aware Automated Curriculum Learning. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 2, pp. 578–587). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-short.64
