Abstract
The problem of AMR-to-text generation is to recover a text representing the same meaning as an input AMR graph. The current state-of-the-art method uses a sequence-to-sequence model, leveraging an LSTM to encode a linearized AMR structure. Although able to model non-local semantic information, a sequence LSTM can lose information from the AMR graph structure, and thus faces challenges with large graphs, which result in long sequences. We introduce a neural graph-to-sequence model, using a novel LSTM structure to encode graph-level semantics directly. On a standard benchmark, our model shows superior results to existing methods in the literature.
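To make the encoder contrast concrete, the sketch below illustrates the graph-state idea the abstract describes: instead of running an LSTM left-to-right over a linearized AMR string, each graph node keeps its own recurrent state and, at every step, aggregates messages from its graph neighbours. This is a minimal illustrative sketch, not the authors' released implementation; the class name, the gate layout, the adjacency-matrix message passing, and the `steps` parameter are all simplifying assumptions.

```python
import torch
import torch.nn as nn

class GraphStateLSTM(nn.Module):
    """Minimal graph-state LSTM sketch (illustrative, not the paper's code).

    Each AMR node holds an LSTM-style (h, c) state. Rather than consuming a
    single left-to-right predecessor as a sequence LSTM does, every node
    updates its state from the sum of its neighbours' hidden states, so
    structural information flows along graph edges at each recurrent step.
    """

    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        # One linear map producing the input, forget, output, and candidate
        # gates from [node embedding; aggregated neighbour state].
        self.gates = nn.Linear(input_dim + hidden_dim, 4 * hidden_dim)
        self.hidden_dim = hidden_dim

    def forward(self, x, adj, steps=5):
        # x:   (num_nodes, input_dim)  node (and edge-label) embeddings
        # adj: (num_nodes, num_nodes)  adjacency; adj[i, j] = 1 if j -> i
        n = x.size(0)
        h = x.new_zeros(n, self.hidden_dim)
        c = x.new_zeros(n, self.hidden_dim)
        for _ in range(steps):
            # Message passing: sum neighbour hidden states along graph edges.
            m = adj @ h
            i, f, o, g = self.gates(torch.cat([x, m], dim=-1)).chunk(4, dim=-1)
            c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
            h = torch.sigmoid(o) * torch.tanh(c)
        return h  # per-node states, e.g. for an attention-based decoder

# Example: encode a toy 3-node graph with edges 0 -> 1 and 0 -> 2.
enc = GraphStateLSTM(input_dim=16, hidden_dim=32)
x = torch.randn(3, 16)
adj = torch.tensor([[0., 0., 0.],
                    [1., 0., 0.],
                    [1., 0., 0.]])
node_states = enc(x, adj)  # shape: (3, 32)
```

Because every node exchanges information with its neighbours at each step, k steps let information propagate k hops through the graph, which is how such an encoder can capture non-local structure without flattening the AMR into a long sequence.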
Citation
Song, L., Zhang, Y., Wang, Z., & Gildea, D. (2018). A graph-to-sequence model for AMR-to-text generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (pp. 1616–1626). Association for Computational Linguistics. https://doi.org/10.18653/v1/p18-1150