Evidence-Aware Inferential Text Generation with Vector Quantised Variational AutoEncoder

Citations: 8 (citations of this article)
Readers: 121 (Mendeley users who have this article in their library)

Abstract

Generating inferential texts about an event from different perspectives requires reasoning over the different contexts in which the event occurs. Existing works usually ignore context that is not explicitly provided, resulting in a context-independent semantic representation that struggles to support the generation. To address this, we propose an approach that automatically finds evidence for an event from a large text corpus and leverages that evidence to guide the generation of inferential texts. Our approach works in an encoder-decoder manner and is equipped with a Vector Quantised-Variational AutoEncoder (VQ-VAE), where the encoder outputs representations from a distribution over discrete variables. Such discrete representations enable automatically selecting relevant evidence, which not only facilitates evidence-aware generation but also provides a natural way to uncover the rationales behind the generation. Our approach achieves state-of-the-art performance on both the Event2Mind and ATOMIC datasets. More importantly, we find that with discrete representations, our model selectively uses evidence to generate different inferential texts.
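The discrete bottleneck the abstract refers to is the vector-quantisation step of the VQ-VAE (van den Oord et al., 2017). The following PyTorch sketch is an illustration of that mechanism, not the authors' implementation: the class name VectorQuantizer, the codebook size of 512, and the commitment weight beta are assumptions made for the example. It shows how a continuous encoder output is snapped to the nearest entry of a learned codebook, yielding a discrete latent code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    """Minimal VQ bottleneck: snaps each encoder output to its nearest codebook entry."""

    def __init__(self, num_codes: int = 512, dim: int = 256, beta: float = 0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta  # commitment cost from the standard VQ-VAE objective

    def forward(self, z_e: torch.Tensor):
        # z_e: (batch, dim) continuous encoder outputs.
        # Squared L2 distance from each output to every codebook entry: (batch, num_codes).
        d = (z_e.pow(2).sum(1, keepdim=True)
             - 2 * z_e @ self.codebook.weight.t()
             + self.codebook.weight.pow(2).sum(1))
        indices = d.argmin(dim=1)        # one discrete code per example
        z_q = self.codebook(indices)     # quantised representation
        # Codebook loss pulls embeddings toward encoder outputs; the beta-weighted
        # commitment loss keeps encoder outputs close to their chosen codes.
        loss = F.mse_loss(z_q, z_e.detach()) + self.beta * F.mse_loss(z_e, z_q.detach())
        # Straight-through estimator: gradients flow to z_e as if quantisation were identity.
        z_q = z_e + (z_q - z_e).detach()
        return z_q, indices, loss

# Hypothetical usage: quantise a batch of 4 encoder outputs.
vq = VectorQuantizer(num_codes=512, dim=256)
z_q, codes, vq_loss = vq(torch.randn(4, 256))

A discrete code such as codes above is what makes evidence selection tractable in the setting the abstract describes: each code can plausibly index or score a small set of retrieved evidence sentences, and inspecting which code was selected offers the kind of rationale behind the generation that the authors mention.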

Cite

Citation style: APA

Guo, D., Tang, D., Duan, N., Yin, J., Jiang, D., & Zhou, M. (2020). Evidence-aware inferential text generation with vector quantised variational AutoEncoder. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (pp. 6118–6129). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.acl-main.544
