Distilling Script Knowledge from Large Language Models for Constrained Language Planning

Abstract

In everyday life, humans often plan their actions by following step-by-step instructions in the form of goal-oriented scripts. Previous work has exploited language models (LMs) to plan for abstract goals of stereotypical activities (e.g., “make a cake”), but leaves more specific goals with multi-faceted constraints understudied (e.g., “make a cake for diabetics”). In this paper, we define the task of constrained language planning for the first time. We propose an over-generate-then-filter approach to improve large language models (LLMs) on this task, and use it to distill a novel constrained language planning dataset, CoScript, which consists of 55,000 scripts. Empirical results demonstrate that our method significantly improves the constrained language planning ability of LLMs, especially on constraint faithfulness. Furthermore, CoScript is demonstrated to be quite effective in endowing smaller LMs with constrained language planning ability.
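The abstract describes the over-generate-then-filter idea only at a high level. The sketch below is an illustrative, non-authoritative rendering of that loop in Python: a hypothetical `generate` callable stands in for LLM sampling, and a toy keyword-overlap scorer stands in for the paper's constraint-faithfulness filtering. Neither is the authors' actual implementation; the point is only the shape of the method, i.e., sample many candidate scripts, then keep the one that best respects the constraint.

```python
from typing import Callable, List


def over_generate_then_filter(
    goal: str,
    constraint: str,
    generate: Callable[[str, int], List[str]],  # hypothetical LLM sampler
    num_candidates: int = 10,
) -> str:
    """Over-generate candidate scripts for a constrained goal, then keep the
    candidate that reflects the constraint most faithfully (toy filter)."""
    prompt = f"List the steps to {goal} ({constraint})."
    candidates = generate(prompt, num_candidates)

    def faithfulness(script: str) -> int:
        # Toy stand-in for a real faithfulness filter: count how many
        # constraint keywords appear in the generated script.
        return sum(word.lower() in script.lower() for word in constraint.split())

    return max(candidates, key=faithfulness)


# Minimal usage with a dummy sampler in place of an actual LLM call.
def dummy_generate(prompt: str, n: int) -> List[str]:
    return [
        "1. Mix flour and sugar. 2. Bake. 3. Serve.",
        "1. Choose a sugar-free sweetener suitable for diabetics. 2. Mix and bake. 3. Serve.",
    ][:n]


best = over_generate_then_filter("make a cake", "for diabetics", dummy_generate)
print(best)
```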

Citation (APA)

Yuan, S., Chen, J., Fu, Z., Ge, X., Shah, S., Jankowski, C. R., … Yang, D. (2023). Distilling Script Knowledge from Large Language Models for Constrained Language Planning. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 4303–4325). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-long.236
