Abstract
Compositional reasoning across texts has been a long-standing challenge in natural language processing. With large language models like GPT-4 taking over the field, prompting techniques such as chain-of-thought (CoT) have been proposed to unlock the compositional, multi-step reasoning capabilities of LLMs. Despite their success, these prompts demand significant human effort to discover and validate. Our work draws attention to the idea of transferring task-specific inductive biases from finetuned models to prompts as a way of improving GPT-4's compositional reasoning capabilities. To leverage these inductive biases, we formulate prompt templates that ease their transfer. Experimental results on multi-hop question answering and numerical reasoning over text show that our proposed prompt scheme achieves competitive zero-shot and few-shot performance compared to existing prompts on complex reasoning tasks, highlighting the importance of adopting the validated biases of the previous paradigm.
Citation
Kim, J., Hong, G., Myaeng, S. H., & Whang, J. J. (2023). FinePrompt: Unveiling the Role of Finetuned Inductive Bias on Compositional Reasoning in GPT-4. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 3763–3775). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-emnlp.245