Code-Style In-Context Learning for Knowledge-Based Question Answering


Abstract

Current methods for Knowledge-Based Question Answering (KBQA) usually rely on complex training techniques and model frameworks, leading to many limitations in practical applications. Recently, the emergence of In-Context Learning (ICL) capabilities in Large Language Models (LLMs) has provided a simple, training-free semantic parsing paradigm for KBQA: given a small number of questions and their labeled logical forms as demonstration examples, LLMs can understand the task intent and generate the logical form for a new question. However, current powerful LLMs have little exposure to logical forms during pre-training, resulting in a high format error rate. To solve this problem, we propose a code-style in-context learning method for KBQA, which converts the generation of unfamiliar logical forms into the more familiar code generation process for LLMs. Experimental results on three mainstream datasets show that our method dramatically mitigates formatting errors in generated logical forms while achieving a new SOTA on WebQSP, GrailQA, and GraphQ under the few-shot setting. The code and supplementary files are released at https://github.com/Arthurizijar/KB-Coder.
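To illustrate the idea, the sketch below shows what code-style few-shot prompting for KBQA could look like. It is a minimal example assuming the OpenAI Python client; the demo questions, the Freebase-style relation strings, and the JOIN helper vocabulary are illustrative stand-ins, not KB-Coder's actual prompt or function library.

```python
"""Minimal sketch: code-style in-context learning for KBQA.

Assumptions (not from the paper): the OpenAI chat API as the LLM
backend, and a toy JOIN(...) vocabulary standing in for whatever
function library the actual method defines.
"""
from openai import OpenAI

# Few-shot demos: each question is paired with a logical form written
# as Python-style function calls, so the LLM sees a format close to
# the code it was heavily exposed to during pre-training.
FEW_SHOT_DEMOS = """\
# Question: what country borders France?
expression = JOIN("location.location.adjoin_s", "France")

# Question: who directed Titanic?
expression = JOIN("film.film.directed_by", "Titanic")
"""


def build_prompt(question: str) -> str:
    """Prepend the labeled demos so the model imitates the code style."""
    return (
        "Translate each question into a logical form written as "
        "Python function calls.\n\n"
        f"{FEW_SHOT_DEMOS}\n"
        f"# Question: {question}\n"
        "expression ="
    )


def generate_logical_form(question: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable code-oriented LLM works here
        messages=[{"role": "user", "content": build_prompt(question)}],
        temperature=0.0,
    )
    return resp.choices[0].message.content.strip()


if __name__ == "__main__":
    print(generate_logical_form("what is the capital of Japan?"))
```

Because the model completes a Python assignment rather than an unfamiliar S-expression, its output tends to be syntactically well-formed code that can then be parsed back into an executable logical form.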

Citation (APA)

Nie, Z., Zhang, R., Wang, Z., & Liu, X. (2024). Code-Style In-Context Learning for Knowledge-Based Question Answering. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, pp. 18833–18841). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v38i17.29848
