Guiding AMR Parsing with Reverse Graph Linearization

Abstract

Abstract Meaning Representation (AMR) parsing aims to extract an abstract semantic graph from a given sentence. Sequence-to-sequence approaches, which linearize the semantic graph into a sequence of nodes and edges and generate the linearized graph directly, have achieved good performance. However, we observed that these approaches suffer from structure loss accumulation during the decoding process, leading to a much lower F1-score for nodes and edges decoded later compared to those decoded earlier. To address this issue, we propose a novel Reverse Graph Linearization (RGL) enhanced framework. RGL defines both default and reverse linearization orders of an AMR graph, where most structures at the back part of the default order appear at the front part of the reversed order and vice versa. RGL incorporates the reversed linearization into the original AMR parser through a two-pass self-distillation mechanism, which guides the model when generating the default linearizations. Our analysis shows that our proposed method significantly mitigates the problem of structure loss accumulation, outperforming the previous best AMR parsing model by 0.8 and 0.5 Smatch scores on the AMR 2.0 and AMR 3.0 datasets, respectively. The code is available at https://github.com/pkunlp-icler/AMR_reverse_graph_linearization.
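
To make the linearization idea concrete, here is a minimal illustrative Python sketch, not taken from the authors' released code: a depth-first, PENMAN-style linearization of a toy AMR graph in both the default child order and the reversed child order. The toy graph, variable names, and the `linearize` function are hypothetical; the point is only that material emitted late in the default sequence tends to appear early in the reversed one, which is the property RGL exploits.

```python
# Hypothetical sketch (not the authors' implementation): DFS
# linearization of a toy AMR graph for "The boy wants to go."
AMR = {
    "w": ("want-01", [(":ARG0", "b"), (":ARG1", "g")]),
    "b": ("boy", []),
    "g": ("go-02", [(":ARG0", "b")]),
}

def linearize(graph, node, reverse=False, visited=None):
    """Emit a PENMAN-like token sequence via depth-first traversal.

    With reverse=True, each node's outgoing edges are visited in
    reversed order, so structures that come late in the default
    sequence come early in the reversed one, and vice versa.
    """
    if visited is None:
        visited = set()
    if node in visited:               # re-entrant node: variable only
        return [node]
    visited.add(node)
    concept, edges = graph[node]
    tokens = ["(", node, "/", concept]
    for role, child in (reversed(edges) if reverse else edges):
        tokens += [role] + linearize(graph, child, reverse, visited)
    tokens.append(")")
    return tokens

print(" ".join(linearize(AMR, "w")))
# ( w / want-01 :ARG0 ( b / boy ) :ARG1 ( g / go-02 :ARG0 b ) )
print(" ".join(linearize(AMR, "w", reverse=True)))
# ( w / want-01 :ARG1 ( g / go-02 :ARG0 ( b / boy ) ) :ARG0 b )
```

In the paper's framework, sequences in these two orders are what the two decoding passes operate over; the pass over the reversed order supplies the distillation signal that guides generation in the default order.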

Cite

APA

Gao, B., Chen, L., Wang, P., Sui, Z., & Chang, B. (2023). Guiding AMR Parsing with Reverse Graph Linearization. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 13–26). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-emnlp.2
