AMR Parsing with Causal Hierarchical Attention and Pointers

Abstract

Translation-based AMR parsers have recently gained popularity due to their simplicity and effectiveness. They predict linearized graphs as free text, avoiding explicit structure modeling. However, this simplicity neglects structural locality in AMR graphs and introduces unnecessary tokens to represent coreferences. In this paper, we introduce new target forms for AMR parsing and a novel model, CHAP, which is equipped with causal hierarchical attention and a pointer mechanism, enabling the integration of structure into the Transformer decoder. We empirically explore various alternative modeling options. Experiments show that our model outperforms baseline models on four of five benchmarks in the setting with no additional data.
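To illustrate the coreference issue the abstract mentions, here is a minimal sketch (not the paper's code; the token format and `<ptr:i>` notation are assumptions for illustration) of how a pointer can replace a repeated variable token when a linearized AMR graph contains a re-entrancy:

```python
def add_pointers(tokens, var_to_pos):
    """Replace repeated variable mentions with pointer tokens.

    tokens: a linearized graph in which a re-entrant node appears as its
            variable name (e.g. "b") after its first introduction.
    var_to_pos: maps each variable to the index of its concept token.
    """
    out = []
    for tok in tokens:
        if tok in var_to_pos:
            # Point back to the earlier position instead of copying
            # an arbitrary variable name into the vocabulary.
            out.append(f"<ptr:{var_to_pos[tok]}>")
        else:
            out.append(tok)
    return out


# "The boy wants to go": (w / want-01 :ARG0 (b / boy) :ARG1 (g / go-02 :ARG0 b))
# The node b is referenced twice; its second mention becomes a pointer.
tokens = ["(", "want-01", ":ARG0", "(", "boy", ")", ":ARG1",
          "(", "go-02", ":ARG0", "b", ")", ")"]
print(add_pointers(tokens, {"b": 4}))  # second mention of b -> <ptr:4>
```

A decoder with such pointer targets can score positions in its own output rather than generating variable names, which is the kind of structural signal a plain seq2seq linearization discards.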

Citation (APA)

Lou, C., & Tu, K. (2023). AMR Parsing with Causal Hierarchical Attention and Pointers. In EMNLP 2023 - 2023 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 8942–8955). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.emnlp-main.553
