STORYWARS: A Dataset and Instruction Tuning Baselines for Collaborative Story Understanding and Generation

Abstract

Collaborative stories, which are texts created jointly by multiple authors with different writing styles and intentions, pose unique challenges for NLP models. Understanding and generating such stories remains an underexplored area due to the lack of open-domain corpora. To address this, we introduce STORYWARS, a new dataset of over 40,000 collaborative stories written by 9,400 different authors from an online platform. We design 12 task types, comprising 7 understanding and 5 generation task types, on STORYWARS, deriving 101 diverse story-related tasks in total as a multi-task benchmark covering fully-supervised, few-shot, and zero-shot scenarios. Furthermore, we present our instruction-tuned model, INSTRUCTSTORY, for the story tasks, showing that instruction tuning, in addition to achieving superior results in zero-shot and few-shot scenarios, can also obtain the best performance on the fully-supervised tasks, establishing strong multi-task benchmark performance on STORYWARS.

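To make the instruction-tuning setup concrete, below is a minimal sketch of how one of the 101 story-related tasks might be cast as an (instruction + input, target) pair for a sequence-to-sequence model. The instruction wording, the continuation task, and the field names are illustrative assumptions for this sketch; the actual task templates used for INSTRUCTSTORY are defined in the paper, not here.

```python
# Minimal sketch: turning a multi-author collaborative story into a
# single instruction-tuning example. All template text below is a
# hypothetical placeholder, not the paper's actual prompt format.

def build_continuation_example(turns, instruction=None):
    """Convert a list of per-author turns into an (input, target) pair.

    The last turn is held out as the generation target; earlier turns
    form the context, prefixed with author tags so the model can see
    the shift in writing styles.
    """
    if instruction is None:
        instruction = (
            "Continue the collaborative story below, matching the "
            "style of the previous authors."
        )
    context = "\n".join(
        f"[Author {t['author']}] {t['text']}" for t in turns[:-1]
    )
    return {
        "input": f"{instruction}\n\n{context}",
        "target": turns[-1]["text"],  # final turn serves as the label
    }


if __name__ == "__main__":
    story = [
        {"author": "A", "text": "The lighthouse went dark at midnight."},
        {"author": "B", "text": "Mara rowed toward it anyway."},
        {"author": "C", "text": "On the rocks, a lantern flickered twice."},
    ]
    example = build_continuation_example(story)
    print(example["input"])
    print("---")
    print(example["target"])
```

Understanding tasks would follow the same pattern with a different instruction and a classification-style target, which is what lets a single instruction-tuned model cover all 12 task types.
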
Citation (APA)

Du, Y., & Chilton, L. (2023). STORYWARS: A Dataset and Instruction Tuning Baselines for Collaborative Story Understanding and Generation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 3044–3062). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-long.171
