Large language models (LLMs) have demonstrated remarkable performance across a range of natural language processing (NLP) tasks. However, they encounter significant challenges in automated reasoning, especially in multi-step reasoning scenarios. To solve complex reasoning problems, LLMs need to perform faithful multi-step reasoning over a given set of facts and rules. Much prior work has focused on guiding LLMs to reason logically by generating reasoning paths, but it ignores the relationships among the available facts. In this paper, we introduce MindMap, a straightforward yet powerful approach for constructing evidence chains to support reasoning in LLMs. An evidence chain is a set of facts associated with the same subject. Organizing related facts together in this way helps avoid missing relevant information. MindMap can seamlessly integrate with existing reasoning frameworks, such as Chain-of-Thought (CoT) and Selection-Inference (SI), by enabling the model to generate and select relevant evidence chains from independent facts. Experimental results on the bAbI and ProofWriter (OWA) datasets demonstrate the effectiveness of MindMap. Our approach significantly enhances the performance of CoT and SI, particularly on multi-step reasoning tasks.
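To make the "evidence chain" notion concrete, the following is a minimal sketch of grouping facts that share the same subject, as the abstract describes. It assumes facts are already available as (subject, relation, object) triples; the triple format and the function name are illustrative assumptions, not the authors' implementation, which operates over natural-language facts via the LLM itself.

```python
from collections import defaultdict

def build_evidence_chains(facts):
    """Group (subject, relation, object) facts into chains keyed by subject."""
    chains = defaultdict(list)
    for subject, relation, obj in facts:
        chains[subject].append((subject, relation, obj))
    return dict(chains)

# Toy facts in the spirit of bAbI-style stories (illustrative only).
facts = [
    ("Gertrude", "is", "a mouse"),
    ("Gertrude", "is afraid of", "the wolf"),
    ("the wolf", "is afraid of", "the cat"),
]

for subject, chain in build_evidence_chains(facts).items():
    print(subject, "->", chain)
```

In this sketch, the two facts about "Gertrude" end up in the same chain, so a downstream reasoning step (e.g., a CoT or SI prompt) can consume them together rather than as isolated statements.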
Citation:
Wu, Y., Han, X., Song, W., Cheng, M., & Li, F. (2024). MindMap: Constructing Evidence Chains for Multi-Step Reasoning in Large Language Models. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, pp. 19270–19278). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v38i17.29896