LogiCoT: Logical Chain-of-Thought Instruction Tuning

Abstract

Generative Pre-trained Transformer 4 (GPT-4) demonstrates impressive chain-of-thought reasoning ability. Recent work on self-instruction tuning, such as Alpaca, has focused on enhancing the general proficiency of models. These instructions enable the model to achieve performance comparable to GPT-3.5 on general tasks like open-domain text generation and paraphrasing. However, they fall short of helping the model handle complex reasoning tasks. To bridge this gap, this paper presents LogiCoT, a new instruction-tuning dataset for logical chain-of-thought reasoning with GPT-4. We elaborate on the process of harvesting instructions for prompting GPT-4 to generate chain-of-thought rationales. LogiCoT serves as an instruction set for teaching models logical reasoning and eliciting general reasoning skills.
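
The paper itself details the instruction-harvesting pipeline; as a rough illustration of the general idea only (not the authors' actual prompts or code), a chain-of-thought rationale can be elicited from GPT-4 along the following lines. This sketch assumes the openai Python package (v1+) with an OPENAI_API_KEY in the environment; the prompt wording and the seed logical-reasoning instance are invented for illustration.

    # Hypothetical sketch: eliciting a chain-of-thought rationale from GPT-4
    # for one seed logical-reasoning instance. The prompts below are
    # illustrative, not LogiCoT's actual harvesting prompts.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Invented seed instance, standing in for an item from a reasoning dataset.
    premises = "All metals conduct electricity. Copper is a metal."
    question = "Does copper conduct electricity?"

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": "You are a careful logician. Reason step by step, "
                           "then state the final answer.",
            },
            {
                "role": "user",
                "content": f"Premises: {premises}\nQuestion: {question}\n"
                           "Explain your reasoning step by step.",
            },
        ],
    )

    # The step-by-step reply becomes the rationale half of one
    # (instruction, rationale) instruction-tuning example.
    rationale = response.choices[0].message.content
    print(rationale)
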

Citation (APA)

Liu, H., Teng, Z., Cui, L., Zhang, C., Zhou, Q., & Zhang, Y. (2023). LogiCoT: Logical Chain-of-Thought Instruction Tuning. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 2908–2921). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-emnlp.191
