Generating text from structured inputs, such as meaning representations or RDF triples, has often involved the use of specialized graph-encoding neural networks. However, recent applications of pretrained transformers to linearizations of graph inputs have yielded state-of-the-art generation results on graph-to-text tasks. Here, we explore the ability of these linearized models to encode local graph structures, in particular their invariance to the graph linearization strategy and their ability to reconstruct corrupted inputs. Our findings motivate solutions to enrich the quality of models' implicit graph encodings via scaffolding. Namely, we use graph-denoising objectives implemented in a multi-task text-to-text framework. We find that these denoising scaffolds lead to substantial improvements in downstream generation in low-resource settings.
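The linearization and denoising scaffolds described above can be sketched as follows. This is a minimal illustration, not the paper's exact implementation: the special tokens (`<H>`, `<R>`, `<T>`), the `<mask>` token, and the masking probability are assumptions chosen for clarity.

```python
import random

def linearize(triples):
    """Flatten (head, relation, tail) triples into a token sequence
    that a pretrained text-to-text transformer can consume.
    Token format is an assumption for illustration."""
    return " ".join(f"<H> {h} <R> {r} <T> {t}" for h, r, t in triples)

def corrupt(linearized, mask_token="<mask>", p=0.3, rng=None):
    """Randomly mask entity/relation tokens (but not structural markers)
    to produce the input for a graph-denoising objective."""
    rng = rng or random.Random(0)
    return " ".join(
        mask_token if not tok.startswith("<") and rng.random() < p else tok
        for tok in linearized.split()
    )

# Example RDF-style triples (hypothetical data, for illustration only).
triples = [("John_Doe", "birthPlace", "London"), ("London", "country", "UK")]
src = linearize(triples)
# In a multi-task setup, the model would be trained to reconstruct `src`
# from `noisy` alongside the main graph-to-text generation objective.
noisy = corrupt(src)
```

The denoising task shares the model's input/output vocabulary with the generation task, which is what allows it to be added as a scaffold in a single multi-task text-to-text framework.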
Hoyle, A., Marasović, A., & Smith, N. A. (2021). Promoting Graph Awareness in Linearized Graph-to-Text Generation. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 944–956). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-acl.82