Gradually Excavating External Knowledge for Implicit Complex Question Answering

Abstract

Recently, large language models (LLMs) have gained much attention for their emergent, human-comparable capabilities and huge potential. However, for open-domain implicit question answering, LLMs may not be the ultimate solution due to: 1) uncovered or out-of-date domain knowledge, and 2) one-shot generation and hence restricted comprehensiveness. To this end, this work proposes a gradual knowledge excavation framework for open-domain complex question answering, in which LLMs iteratively and actively acquire external information and then reason over the accumulated knowledge. Specifically, at each step of the solving process, the model selects an action to execute, such as querying external knowledge or performing a single logical reasoning step, gradually progressing toward a final answer. Our method can effectively leverage plug-and-play external knowledge and dynamically adjust its strategy for solving complex questions. Evaluated on the StrategyQA dataset, our method achieves 78.17% accuracy with fewer than 6% of the parameters of its competitors, setting a new SOTA for ∼10B-scale LLMs.
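
The sketch below illustrates the kind of iterative acquire-then-reason loop the abstract describes. It is a minimal illustration, not the paper's implementation: `llm_complete` and `retrieve_knowledge` are hypothetical placeholders for an LLM completion API and an external knowledge retriever, and the action names and prompt format are assumptions made here for clarity.

```python
def llm_complete(prompt: str) -> str:
    """Placeholder for a call to a large language model."""
    raise NotImplementedError

def retrieve_knowledge(query: str) -> str:
    """Placeholder for an external knowledge lookup (e.g. a search index)."""
    raise NotImplementedError

def answer_implicit_question(question: str, max_steps: int = 8) -> str:
    history: list[str] = []  # accumulated external knowledge and reasoning steps
    for _ in range(max_steps):
        # Ask the model to choose the next action given everything gathered so far.
        decision = llm_complete(
            f"Question: {question}\n"
            "History:\n" + "\n".join(history) + "\n"
            "Next action (QUERY: <search terms> | REASON: <one inference step> "
            "| ANSWER: <final answer>):"
        ).strip()

        if decision.startswith("QUERY:"):
            # Acquire plug-and-play external knowledge for the generated query.
            query = decision[len("QUERY:"):].strip()
            history.append(f"Knowledge for '{query}': {retrieve_knowledge(query)}")
        elif decision.startswith("REASON:"):
            # Perform a single logical reasoning step over the history.
            history.append(f"Reasoning: {decision[len('REASON:'):].strip()}")
        elif decision.startswith("ANSWER:"):
            return decision[len("ANSWER:"):].strip()

    # Force a final answer if the step budget is exhausted.
    return llm_complete(
        f"Question: {question}\nHistory:\n" + "\n".join(history) + "\nFinal answer:"
    ).strip()
```

Because the model decides at every step whether to query or to reason, the loop can dynamically adjust its strategy per question rather than committing to a fixed retrieve-then-answer pipeline.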

Citation (APA)
Liu, C., Li, X., Shang, L., Jiang, X., Liu, Q., Lam, E. Y., & Wong, N. (2023). Gradually Excavating External Knowledge for Implicit Complex Question Answering. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 14405–14417). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-emnlp.961
