Enhancing Text Generation via Parse Tree Embedding


Abstract

Natural language generation (NLG) is a core component of machine translation, dialogue systems, speech recognition, summarization, and other applications. Existing text generation methods tend to be based on recurrent neural language models (NLMs), which generate sentences from an encoding vector. However, most of these models lack an explicit structured representation for text generation. In this work, we introduce a new generative model for NLG, called Tree-VAE. It first samples a sentence from the training corpus and then generates a new sentence conditioned on the embedding of that sentence's parse tree. A Tree-LSTM is used together with the Stanford Parser to extract sentence-structure information, which is then used to train a conditional variational autoencoder generator conditioned on the sentence-pattern embeddings. The proposed model is evaluated extensively on three datasets. The experimental results show that it generates substantially more diverse and coherent text than existing baseline methods.
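The core structural step the abstract describes is embedding a parse tree with a Tree-LSTM before conditioning the generator on that vector. A minimal sketch of a Child-Sum Tree-LSTM node update (in the style of Tai et al.) is shown below; all names, dimensions, and the toy tree are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ChildSumTreeLSTM:
    """Minimal Child-Sum Tree-LSTM cell, suitable for embedding a parse
    tree bottom-up into a single root vector. Hypothetical sketch."""

    def __init__(self, x_dim, h_dim, seed=0):
        rng = np.random.default_rng(seed)
        W = lambda: rng.normal(scale=0.1, size=(h_dim, x_dim))  # input weights
        U = lambda: rng.normal(scale=0.1, size=(h_dim, h_dim))  # hidden weights
        self.Wi, self.Ui, self.bi = W(), U(), np.zeros(h_dim)   # input gate
        self.Wf, self.Uf, self.bf = W(), U(), np.zeros(h_dim)   # forget gate
        self.Wo, self.Uo, self.bo = W(), U(), np.zeros(h_dim)   # output gate
        self.Wu, self.Uu, self.bu = W(), U(), np.zeros(h_dim)   # candidate

    def node(self, x, children):
        """x: word embedding at this node; children: list of (h, c) pairs
        from already-processed child nodes (empty for leaves)."""
        h_dim = self.bi.shape[0]
        h_sum = sum((h for h, _ in children), np.zeros(h_dim))
        i = sigmoid(self.Wi @ x + self.Ui @ h_sum + self.bi)
        o = sigmoid(self.Wo @ x + self.Uo @ h_sum + self.bo)
        u = np.tanh(self.Wu @ x + self.Uu @ h_sum + self.bu)
        # one forget gate per child, applied to that child's cell state
        c = i * u
        for h_k, c_k in children:
            f_k = sigmoid(self.Wf @ x + self.Uf @ h_k + self.bf)
            c = c + f_k * c_k
        h = o * np.tanh(c)
        return h, c

# Embed a tiny two-leaf parse tree bottom-up; root_h would then serve as
# the conditioning vector for the generator.
cell = ChildSumTreeLSTM(x_dim=4, h_dim=8)
leaf1 = cell.node(np.ones(4), [])
leaf2 = cell.node(-np.ones(4), [])
root_h, root_c = cell.node(np.zeros(4), [leaf1, leaf2])
```

In a full pipeline, the leaf inputs would be word embeddings at the parse tree's terminals (the tree itself coming from the Stanford Parser), and `root_h` would be fed to the conditional generator as the sentence-pattern embedding.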

Citation (APA)

Duan, D., Zhang, Q., Han, Z., & Xiong, H. (2022). Enhancing Text Generation via Parse Tree Embedding. Computational Intelligence and Neuroscience, 2022. https://doi.org/10.1155/2022/4096383
