Plot Writing From Pre-Trained Language Models

Abstract

Pre-trained language models (PLMs) fail to generate long-form narrative text because they do not consider global structure. As a result, the generated texts are often incohesive, repetitive, or lacking in content. Recent work in story generation reintroduced explicit content planning in the form of prompts, keywords, or semantic frames. Trained on large parallel corpora, these models can generate more logical event sequences and thus more contentful stories. However, these intermediate representations are often not in natural language and cannot be utilized by PLMs without fine-tuning. We propose generating story plots using off-the-shelf PLMs while maintaining the benefit of content planning to generate cohesive and contentful stories. Our proposed method, SCRATCHPLOT, first prompts a PLM to compose a content plan. Then, we generate the story's body and ending conditioned on the content plan. Furthermore, we take a generate-and-rank approach by using additional PLMs to rank the generated (story, ending) pairs. We benchmark our method against various baselines and achieve superior results in both human and automatic evaluation.
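As a rough illustration of the two-stage pipeline the abstract describes, the sketch below prompts an off-the-shelf PLM (GPT-2 via Hugging Face transformers) for a natural-language content plan, then conditions the story body and ending on that plan. The prompt templates, field names, and decoding settings here are illustrative assumptions, not the paper's exact ones.

```python
# Minimal sketch of the two-stage idea: plan first, then generate
# conditioned on the plan. Prompts below are assumed, not the paper's.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

def complete(prompt: str, max_new_tokens: int = 60) -> str:
    """Return only the text the PLM appends after the prompt."""
    out = generator(prompt, max_new_tokens=max_new_tokens,
                    do_sample=True, top_p=0.9, num_return_sequences=1)
    return out[0]["generated_text"][len(prompt):].strip()

# Stage 1: prompt the PLM to compose a natural-language content plan.
content_plan = complete("Write a one-sentence plot outline for a short story:\n")

# Stage 2: generate the story body, then the ending, conditioned on the plan.
body = complete(f"Plot outline: {content_plan}\nStory:\n", max_new_tokens=150)
ending = complete(f"Plot outline: {content_plan}\nStory:\n{body}\nEnding:\n",
                  max_new_tokens=60)
print(content_plan, body, ending, sep="\n---\n")
```

Because the content plan is itself natural language, the same frozen PLM can consume it directly, with no fine-tuning, which is the property the abstract emphasizes.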
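The generate-and-rank step can be sketched in the same spirit: sample several candidate (story, ending) pairs and let a second PLM score them, keeping the best one. Perplexity under GPT-2 is used below as one plausible ranking signal; the actual ranking PLMs and criteria in the paper may differ.

```python
# Hedged sketch of generate-and-rank: score candidate (story, ending)
# pairs with a second PLM and keep the lowest-perplexity pair.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
ranker = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    loss = ranker(ids, labels=ids).loss  # mean cross-entropy per token
    return float(torch.exp(loss))

def rank(pairs: list[tuple[str, str]]) -> tuple[str, str]:
    """Return the (story, ending) pair the ranking PLM finds most fluent."""
    return min(pairs, key=lambda p: perplexity(p[0] + " " + p[1]))

# Usage: sample N candidates with the generator above, then
# best_story, best_ending = rank(candidates)
```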

Citation (APA)

Jin, Y., Kadam, V., & Wanvarie, D. (2022). Plot Writing From Pre-Trained Language Models. In 15th International Natural Language Generation Conference, INLG 2022 (pp. 52–67). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.inlg-main.5
