Modeling Graph Structure via Relative Position for Text Generation from Knowledge Graphs

Abstract

We present Graformer, a novel Transformer-based encoder-decoder architecture for graph-to-text generation. With our novel graph self-attention, the encoding of a node relies on all nodes in the input graph, not only its direct neighbors, facilitating the detection of global patterns. We represent the relation between two nodes as the length of the shortest path between them. Graformer learns to weight these node-node relations differently for different attention heads, thus virtually learning differently connected views of the input graph. We evaluate Graformer on two popular graph-to-text generation benchmarks, AGENDA and WebNLG, where it achieves strong performance while using many fewer parameters than other approaches.
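The mechanism the abstract describes lends itself to a short illustration. Below is a minimal sketch (not the authors' released code) of self-attention over graph nodes in which every node attends to every other node, and a learned per-head scalar bias, indexed by the clipped shortest-path length between two nodes, injects graph structure into the attention scores. The class name GraphSelfAttention, the clipping distance max_dist, and the Floyd-Warshall helper are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

def shortest_paths(adj: torch.Tensor, inf: int = 10**6) -> torch.Tensor:
    # Floyd-Warshall over an (n, n) 0/1 adjacency matrix; returns hop counts.
    n = adj.size(0)
    d = torch.full((n, n), inf, dtype=torch.long)
    d[adj.bool()] = 1
    d.fill_diagonal_(0)
    for k in range(n):
        d = torch.minimum(d, d[:, k:k + 1] + d[k:k + 1, :])
    return d

class GraphSelfAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int, max_dist: int = 8):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        # One learned scalar per (clipped path length, head): each head can
        # weight node-node distances differently.
        self.dist_bias = nn.Embedding(max_dist + 1, n_heads)
        self.max_dist = max_dist

    def forward(self, x: torch.Tensor, dist: torch.Tensor) -> torch.Tensor:
        # x:    (batch, n_nodes, d_model) node representations
        # dist: (batch, n_nodes, n_nodes) shortest-path lengths between nodes
        b, n, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(b, n, self.n_heads, self.d_head).transpose(1, 2)
        k = k.view(b, n, self.n_heads, self.d_head).transpose(1, 2)
        v = v.view(b, n, self.n_heads, self.d_head).transpose(1, 2)
        scores = q @ k.transpose(-2, -1) / self.d_head ** 0.5   # (b, h, n, n)
        # Bias every node-node score by a learned value for their distance,
        # so attention spans the whole graph, not just direct neighbors.
        bias = self.dist_bias(dist.clamp(max=self.max_dist))    # (b, n, n, h)
        scores = scores + bias.permute(0, 3, 1, 2)
        attn = F.softmax(scores, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, n, -1)
        return self.out(out)

# Toy usage on a 4-node path graph.
adj = torch.tensor([[0, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [0, 0, 1, 0]])
dist = shortest_paths(adj).unsqueeze(0)              # (1, 4, 4)
layer = GraphSelfAttention(d_model=16, n_heads=4)
y = layer(torch.randn(1, 4, 16), dist)               # (1, 4, 16)

Because each head owns its own bias table, different heads can emphasize different path lengths, which is one way to realize the "differently connected views" of the input graph that the abstract mentions.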

Cite

APA

Schmitt, M., Ribeiro, L. F. R., Dufter, P., Gurevych, I., & Schütze, H. (2021). Modeling graph structure via relative position for text generation from knowledge graphs. In Proceedings of the Fifteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-15) (pp. 10–21). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.textgraphs-1.2
