APGN: Adversarial and Parameter Generation Networks for Multi-Source Cross-Domain Dependency Parsing


Abstract

Thanks to the strong representation learning capability of deep learning, especially pretraining techniques with language model loss, dependency parsing has achieved a great performance boost in the in-domain scenario with abundant labeled training data for target domains. However, the parsing community has to face the more realistic setting where parsing performance drops drastically when labeled data exists only for several fixed out-domains. In this work, we propose a novel model for multi-source cross-domain dependency parsing. The model consists of two components: a parameter generation network for distinguishing domain-specific features, and an adversarial network for learning domain-invariant representations. Experiments on a recently released dataset for multi-domain dependency parsing show that our model can consistently improve cross-domain parsing performance by about 2 points in averaged labeled attachment score (LAS) over strong BERT-enhanced baselines. Detailed analysis is conducted to gain more insights into the contributions of the two components.

Citation (APA)

Li, Y., Zhang, M., Li, Z., Zhang, M., Wang, Z., Huai, B., & Yuan, N. J. (2021). APGN: Adversarial and Parameter Generation Networks for Multi-Source Cross-Domain Dependency Parsing. In Findings of the Association for Computational Linguistics, Findings of ACL: EMNLP 2021 (pp. 1724–1733). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-emnlp.149
