Towards Reasoning in Large Language Models: A Survey


Abstract

Reasoning is a fundamental aspect of human intelligence that plays a crucial role in activities such as problem solving, decision making, and critical thinking. In recent years, large language models (LLMs) have made significant progress in natural language processing, and it has been observed that these models may exhibit reasoning abilities when they are sufficiently large. However, it is not yet clear to what extent LLMs are capable of reasoning. This paper provides a comprehensive overview of the current state of knowledge on reasoning in LLMs, including techniques for improving and eliciting reasoning in these models, methods and benchmarks for evaluating reasoning abilities, findings and implications of previous research in this field, and suggestions for future directions. Our aim is to provide a detailed and up-to-date review of this topic and to stimulate meaningful discussion and future work.

Cite

APA

Huang, J., & Chang, K. C.-C. (2023). Towards reasoning in large language models: A survey. In Findings of the Association for Computational Linguistics: ACL 2023 (pp. 1049–1065). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.findings-acl.67
