Exploring Structural Encoding for Data-to-Text Generation


Abstract

Owing to efficient end-to-end training and the fluency of generated texts, several encoder-decoder models have recently been proposed for data-to-text generation. Appropriate encoding of the input data is a crucial part of such encoder-decoder models, yet only a few studies have focused on proper encoding methods. This paper presents a novel encoder-decoder based data-to-text generation model whose encoder carefully encodes the input data according to the data's underlying structure. The effectiveness of the proposed encoder is evaluated both extrinsically and intrinsically by shuffling the input data without changing its meaning. To select appropriate content from the encoder's output, the proposed model incorporates attention gates in the decoder. Through extensive experiments on the WikiBio and E2E datasets, we show that our model outperforms state-of-the-art models and several standard baseline systems. Analysis of the model through component ablation tests and human evaluation endorses the proposed model as a well-grounded system.
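The abstract does not spell out how the decoder's attention gates operate, so the following is only a minimal sketch of one plausible formulation: standard dot-product attention over the encoder states, whose context vector is then modulated element-wise by a learned sigmoid gate. The function name `gated_attention` and the gate parameter `W_gate` are illustrative assumptions, not the paper's actual notation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def gated_attention(encoder_states, decoder_state, W_gate):
    """Sketch of an attention-gated context vector (hypothetical formulation).

    encoder_states: (T, H) array of encoded input representations
    decoder_state:  (H,)  current decoder hidden state
    W_gate:         (H, H) learned gate parameters (assumption)
    """
    scores = encoder_states @ decoder_state        # (T,) dot-product scores
    weights = softmax(scores)                      # attention distribution
    context = weights @ encoder_states             # (H,) weighted sum of states
    gate = 1.0 / (1.0 + np.exp(-(W_gate @ context)))  # sigmoid gate in (0, 1)
    return gate * context                          # gated context vector
```

The gate lets the decoder suppress dimensions of the context vector that are irrelevant at the current generation step, which matches the abstract's stated goal of "selecting appropriate content information in encoded data".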

Citation (APA)

Mahapatra, J., & Garain, U. (2021). Exploring Structural Encoding for Data-to-Text Generation. In INLG 2021 - 14th International Conference on Natural Language Generation, Proceedings (pp. 404–415). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.inlg-1.44
