Instruction Induction: From Few Examples to Natural Language Task Descriptions

18 citations · 83 Mendeley readers

Abstract

Large language models are able to perform a task by conditioning on a few input-output demonstrations, a paradigm known as in-context learning. We show that language models can explicitly infer an underlying task from a few demonstrations by prompting them to generate a natural language instruction that fits the examples. To explore this ability, we introduce the instruction induction challenge, compile a dataset consisting of 24 tasks, and define a novel evaluation metric based on executing the generated instruction. We discover that, to a large extent, the ability to generate instructions does indeed emerge when using a model that is both large enough and aligned to follow instructions; InstructGPT achieves 65.7% of human performance in our execution-based metric, while the original GPT-3 model reaches only 9.8% of human performance. This surprising result suggests that instruction induction might be a viable learning paradigm in and of itself, where instead of fitting a set of latent continuous parameters to the data, one searches for the best description in the natural language hypothesis space.
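
The sketch below illustrates the two-step setup the abstract describes: first prompt a model to verbalize the task from a few demonstrations (instruction induction), then score the induced instruction by executing it on held-out inputs. This is a minimal illustration, not the authors' released code; the prompt wording, the `complete()` placeholder for a language model API, and the exact-match scorer are assumptions standing in for the paper's own templates and metrics.

```python
# Minimal sketch of instruction induction with execution-based evaluation.
# The prompt template, complete() stub, and exact-match scoring are
# illustrative assumptions, not the paper's exact implementation.
from typing import Callable, List, Tuple

Example = Tuple[str, str]  # (input, output) demonstration pair


def build_induction_prompt(demos: List[Example]) -> str:
    """Ask the model to describe the task that maps the inputs to the outputs."""
    lines = [
        "I gave a friend an instruction. Based on the instruction they "
        "produced the following input-output pairs:",
        "",
    ]
    for x, y in demos:
        lines.extend([f"Input: {x}", f"Output: {y}", ""])
    lines.append("The instruction was:")
    return "\n".join(lines)


def execute_instruction(
    instruction: str, test_input: str, complete: Callable[[str], str]
) -> str:
    """Execution-based evaluation: prepend the induced instruction to a held-out input."""
    prompt = f"Instruction: {instruction}\nInput: {test_input}\nOutput:"
    return complete(prompt).strip()


def execution_accuracy(
    instruction: str, test_set: List[Example], complete: Callable[[str], str]
) -> float:
    """Fraction of held-out examples solved by the induced instruction (exact match)."""
    correct = sum(
        execute_instruction(instruction, x, complete) == y for x, y in test_set
    )
    return correct / len(test_set)


if __name__ == "__main__":
    # `complete` must be wired to an actual language model; it is a stub here.
    def complete(prompt: str) -> str:
        raise NotImplementedError("plug in a language model completion call")

    demos = [("cat", "cats"), ("dog", "dogs"), ("bus", "buses")]
    print(build_induction_prompt(demos))
```

In this framing, the induced instruction plays the role of the learned model: instead of fitted parameters, the "hypothesis" is a natural language description whose quality is measured by how well a model following it reproduces the held-out outputs.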

Citation (APA)

Honovich, O., Shaham, U., Bowman, S. R., & Levy, O. (2023). Instruction Induction: From Few Examples to Natural Language Task Descriptions. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 1935–1952). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-long.108
