Variational Graph Autoencoding as Cheap Supervision for AMR Coreference Resolution

Citations: 9
Readers (Mendeley): 39

Abstract

Coreference resolution over semantic graphs such as AMRs aims to group the graph nodes that represent the same entity. This is a crucial step in constructing document-level formal semantic representations. With annotated data for AMR coreference resolution, deep learning approaches have recently shown great potential on this task, yet they are usually data-hungry and annotation is costly. We propose a general pretraining method based on a variational graph autoencoder (VGAE) for AMR coreference resolution, which can leverage any general AMR corpus and even automatically parsed AMR data. Experiments on benchmarks show that the pretraining approach yields gains of up to 6 absolute F1 points. Moreover, our model improves on the previous state-of-the-art model by up to 11 F1 points.
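To make the pretraining idea concrete, the following is a minimal sketch of a VGAE forward pass over a small graph: a GCN encoder produces per-node means and log-variances, a latent embedding is sampled via the reparameterization trick, and an inner-product decoder reconstructs the adjacency matrix. This NumPy toy (dimensions, weight initialization, and the example graph are all illustrative assumptions) is not the authors' implementation, which operates on AMR graphs.

```python
# Minimal VGAE forward-pass sketch (NumPy). All names, shapes, and the toy
# graph below are illustrative assumptions, not the paper's actual model.
import numpy as np

rng = np.random.default_rng(0)

def normalize_adj(A):
    """Symmetric normalization with self-loops: D^{-1/2}(A+I)D^{-1/2}."""
    A = A + np.eye(A.shape[0])
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A @ D_inv_sqrt

def vgae_forward(A, X, W1, W_mu, W_logvar):
    """Encode node features into latent embeddings, then decode edges."""
    A_hat = normalize_adj(A)
    H = np.maximum(A_hat @ X @ W1, 0.0)          # one GCN layer + ReLU
    mu = A_hat @ H @ W_mu                        # mean of latent embeddings
    logvar = A_hat @ H @ W_logvar                # log-variance
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * logvar) * eps          # reparameterization trick
    A_rec = 1.0 / (1.0 + np.exp(-(z @ z.T)))     # inner-product edge decoder
    return z, A_rec

# Toy graph: 4 nodes, a few undirected edges, random 8-dim node features.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
X = rng.standard_normal((4, 8))
W1 = rng.standard_normal((8, 8))
W_mu = rng.standard_normal((8, 4))
W_logvar = rng.standard_normal((8, 4))

z, A_rec = vgae_forward(A, X, W1, W_mu, W_logvar)
print(z.shape, A_rec.shape)  # (4, 4) (4, 4)
```

Pretraining would optimize a reconstruction loss on `A_rec` plus a KL term on `(mu, logvar)`; the learned node encoder is then reused as cheap supervision for the coreference model.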

Citation (APA)

Li, I., Song, L., Xu, K., & Yu, D. (2022). Variational Graph Autoencoding as Cheap Supervision for AMR Coreference Resolution. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 2790–2800). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-long.199
