Scripts - prototypical event sequences describing everyday activities - have been shown to help in understanding narratives by providing expectations, resolving ambiguity, and filling in unstated information. However, to date they have proved hard to author or extract from text. In this work, we demonstrate for the first time that pre-trained neural language models can be fine-tuned to generate high-quality scripts, at varying levels of granularity, for a wide range of everyday scenarios (e.g., bake a cake). To do this, we collect a large crowdsourced dataset of 6.4k partially ordered scripts (named proScript), which is substantially larger than prior datasets, and develop models that generate scripts by combining language generation and graph structure prediction. We define two complementary tasks: (i) edge prediction: given a scenario and unordered events, organize the events into a valid (possibly partial-order) script, and (ii) script generation: given only a scenario, generate events and organize them into a (possibly partial-order) script. Our experiments show that our models perform well (e.g., F1 = 75.7 on task (i)), illustrating a new approach to overcoming previous barriers to script collection. We also show that there is still significant room for improvement toward human-level performance. Together, our tasks, dataset, and models offer a new research direction for learning script knowledge.
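To make "partially ordered script" concrete, the following is a minimal sketch in Python: a script for the bake-a-cake scenario represented as a DAG whose edges encode temporal precedence. The scenario, event names, and ordering here are illustrative assumptions, not an actual proScript entry.

```python
from graphlib import TopologicalSorter

# Hypothetical partial-order script for the scenario "bake a cake".
# Each key is an event; its value is the set of events that must
# precede it. "preheat oven" and "mix batter" are mutually unordered,
# which is what makes this a partial order (a DAG) rather than a chain.
script = {
    "gather ingredients": set(),
    "preheat oven": {"gather ingredients"},
    "mix batter": {"gather ingredients"},
    "pour batter into pan": {"mix batter"},
    "bake": {"preheat oven", "pour batter into pan"},
    "let cool and serve": {"bake"},
}

# Any topological order of the DAG is one valid way to carry out the scenario.
print(list(TopologicalSorter(script).static_order()))
```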
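The reported edge-prediction score can be read as F1 over directed event pairs. A minimal sketch of one plausible scoring scheme follows; the paper's exact evaluation code may differ, and `edge_f1` is a hypothetical helper name.

```python
def edge_f1(pred_edges, gold_edges):
    """Precision, recall, and F1 over directed (before, after) event pairs.
    One plausible way to score task (i); not necessarily the paper's metric."""
    pred, gold = set(pred_edges), set(gold_edges)
    tp = len(pred & gold)                      # correctly predicted edges
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f1

# Example: one wrong edge among three predictions against three gold edges.
gold = {("mix batter", "bake"), ("preheat oven", "bake"), ("bake", "let cool")}
pred = {("mix batter", "bake"), ("preheat oven", "mix batter"), ("bake", "let cool")}
print(edge_f1(pred, gold))  # ~(0.667, 0.667, 0.667)
```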
Sakaguchi, K., Bhagavatula, C., Le Bras, R., Tandon, N., Clark, P., & Choi, Y. (2021). proScript: Partially Ordered Scripts Generation. In Findings of the Association for Computational Linguistics: EMNLP 2021 (pp. 2138–2149). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.findings-emnlp.184