Graph Pre-training for AMR Parsing and Generation


Abstract

Abstract meaning representation (AMR) highlights the core semantic information of text in a graph structure. Recently, pre-trained language models (PLMs) have respectively advanced the tasks of AMR parsing and AMR-to-text generation. However, PLMs are typically pre-trained on textual data and are thus sub-optimal for modeling structural knowledge. To this end, we investigate graph self-supervised training to improve the structure awareness of PLMs over AMR graphs. In particular, we introduce two graph auto-encoding strategies for graph-to-graph pre-training and four tasks to integrate text and graph information during pre-training. We further design a unified framework to bridge the gap between pre-training and fine-tuning tasks. Experiments on both AMR parsing and AMR-to-text generation show the superiority of our model. To our knowledge, we are the first to consider pre-training on semantic graphs.
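
As an illustration only (not the authors' released code), the sketch below shows the general idea behind a graph auto-encoding strategy of this kind: corrupt a linearized AMR graph by masking a concept node and train a sequence-to-sequence PLM to reconstruct the original graph. The BART backbone, the PENMAN example, and the single-node masking scheme are assumptions made for the sketch, not details taken from the paper.

```python
# Minimal sketch of one graph auto-encoding (denoising) pre-training step on a
# linearized AMR graph. Backbone model, example graph, and masking choice are
# illustrative assumptions, not the authors' exact setup.
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")  # assumed backbone
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# Linearized AMR graph (PENMAN-style) for "The boy wants to go."
amr = "( want-01 :ARG0 ( boy ) :ARG1 ( go-02 :ARG0 ( boy ) ) )"

# Corrupt the graph by masking one concept node; the model must reconstruct it.
corrupted = amr.replace("go-02", tokenizer.mask_token)

inputs = tokenizer(corrupted, return_tensors="pt")
labels = tokenizer(amr, return_tensors="pt").input_ids

# Standard seq2seq cross-entropy over the full, uncorrupted graph.
outputs = model(**inputs, labels=labels)
outputs.loss.backward()  # one (toy) pre-training update step
```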

Citation (APA)

Bai, X., Chen, Y., & Zhang, Y. (2022). Graph Pre-training for AMR Parsing and Generation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 6001–6015). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-long.415
