Abstract
AMR-to-text generation is the challenging task of generating text from graph-based semantic representations. Recent studies formalize this task as a graph-to-sequence learning problem and use various graph neural networks to model graph structure. In this paper, we propose a novel approach that generates texts from AMR graphs while reconstructing the input graph structures. Our model employs a graph attention mechanism to aggregate information for encoding the inputs. Moreover, better node representations are learned by optimizing two simple but effective auxiliary reconstruction objectives: a link prediction objective, which requires predicting the semantic relationship between nodes, and a distance prediction objective, which requires predicting the distance between nodes. Experimental results on two benchmark datasets show that our proposed model improves considerably over strong baselines and achieves new state-of-the-art results.
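To make the two auxiliary reconstruction objectives concrete, the sketch below shows one plausible way to score sampled node pairs with a link-prediction head and a distance-prediction head on top of encoder node representations. This is a minimal PyTorch illustration under assumed names (GraphReconstructionHeads, hidden_dim, max_dist, pair_idx), not the authors' implementation.

```python
import torch
import torch.nn as nn

class GraphReconstructionHeads(nn.Module):
    """Hypothetical auxiliary heads: link prediction and distance prediction."""

    def __init__(self, hidden_dim: int, num_relations: int, max_dist: int):
        super().__init__()
        # Link prediction: classify the semantic relation (or "no edge") of a node pair.
        self.link_scorer = nn.Bilinear(hidden_dim, hidden_dim, num_relations)
        # Distance prediction: classify the (bucketed) graph distance of a node pair.
        self.dist_scorer = nn.Bilinear(hidden_dim, hidden_dim, max_dist + 1)
        self.ce = nn.CrossEntropyLoss()

    def forward(self, node_reprs, pair_idx, rel_labels, dist_labels):
        # node_reprs:  (num_nodes, hidden_dim) outputs of the graph attention encoder
        # pair_idx:    (num_pairs, 2) indices of node pairs drawn from the input AMR graph
        # rel_labels:  (num_pairs,) gold relation label for each pair
        # dist_labels: (num_pairs,) gold (bucketed) graph distance for each pair
        h_i = node_reprs[pair_idx[:, 0]]
        h_j = node_reprs[pair_idx[:, 1]]
        link_loss = self.ce(self.link_scorer(h_i, h_j), rel_labels)
        dist_loss = self.ce(self.dist_scorer(h_i, h_j), dist_labels)
        # In training, these auxiliary losses would be added to the main generation loss.
        return link_loss + dist_loss
```

In this reading, the reconstruction losses act as regularizers on the encoder: node representations must retain enough structural information to recover edges and distances, which in turn benefits generation.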
Citation
Wang, T., Wan, X., & Yao, S. (2020). Better AMR-to-text generation with graph structure reconstruction. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2021-January, pp. 3919–3925). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2020/542