MOCHA: A Multi-Task Training Approach for Coherent Text Generation from Cognitive Perspective

Citations: 0 | Mendeley readers: 24

Abstract

Teaching neural models to generate coherent narrative texts is a critical problem. Recent pretrained language models have achieved promising results, but there is still a gap between human-written texts and machine-generated outputs. In this work, we propose a novel multi-task training strategy for coherent text generation grounded in the cognitive theory of writing, which enables the model to learn essential writing subskills, including planning and reviewing, in addition to end-to-end generation. We extensively evaluate our model on three open-ended generation tasks: story generation, news article writing, and argument generation. Experiments show that our model outperforms strong baselines in both few-shot and fully-supervised settings, and human evaluations confirm that our model generates more coherent outputs.
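The abstract describes a multi-task setup in which planning and reviewing are learned alongside end-to-end generation. The sketch below illustrates one common way such a setup can be wired together; it is not the authors' released code. It assumes a BART-style seq2seq backbone, text-to-text formulations of the three subskills, and hypothetical task prefixes and loss weights.

```python
# Illustrative multi-task fine-tuning sketch (not the authors' implementation).
# Assumptions: BART-style seq2seq model; each subskill (plan, review, generate)
# is cast as a (source, target) text pair; task prefixes and loss weights are
# hypothetical placeholders.
import torch
from transformers import BartForConditionalGeneration, BartTokenizerFast

model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-base")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

# Hypothetical per-subtask loss weights.
subtask_weights = {"generate": 1.0, "plan": 0.5, "review": 0.5}

def subtask_loss(task, source, target):
    """Seq2seq loss for one subtask, with a task prefix on the source text."""
    enc = tokenizer(f"{task}: {source}", return_tensors="pt", truncation=True)
    labels = tokenizer(target, return_tensors="pt", truncation=True).input_ids
    return model(**enc, labels=labels).loss

def training_step(batch):
    """Sum the weighted losses of all subtasks in a (toy) batch and update."""
    total = torch.tensor(0.0)
    for task, (src, tgt) in batch.items():
        total = total + subtask_weights[task] * subtask_loss(task, src, tgt)
    optimizer.zero_grad()
    total.backward()
    optimizer.step()
    return total.item()

# Toy batch with one placeholder example per subskill.
batch = {
    "plan": ("Prompt: a lost key", "Outline: find key -> chase -> reunion"),
    "generate": ("Outline: find key -> chase -> reunion", "Full story text ..."),
    "review": ("Draft with an incoherent sentence ...", "Revised, coherent draft ..."),
}
print(training_step(batch))
```

In this kind of setup, all subtasks share one set of model parameters, so the auxiliary planning and reviewing objectives act as extra supervision for the generation task rather than as separate models.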

Cite

CITATION STYLE

APA

Hu, Z., Chan, H. P., & Huang, L. (2022). MOCHA: A Multi-Task Training Approach for Coherent Text Generation from Cognitive Perspective. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 (pp. 10324–10334). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.emnlp-main.705
