Evaluating Large Language Models in Semantic Parsing for Conversational Question Answering over Knowledge Graphs

Abstract

Conversational question answering systems often rely on semantic parsing to enable interactive information retrieval, which involves generating structured database queries from natural language input. For information-seeking conversations about facts stored in a knowledge graph, dialogue utterances are transformed into graph queries in a process known as knowledge-based conversational question answering. This paper evaluates the performance of large language models that have not been explicitly pre-trained on this task. Through a series of experiments on an extensive benchmark dataset, we compare models of varying sizes with different prompting techniques and identify common issue types in the generated output. Our results demonstrate that large language models are capable of generating graph queries from dialogues, with significant improvements achievable through few-shot prompting and fine-tuning techniques, especially for smaller models that exhibit lower zero-shot performance.
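The few-shot prompting the abstract describes can be sketched as assembling in-context example pairs of utterances and graph queries, followed by the dialogue history and the current question. A minimal sketch in Python, where the example utterances, the SPARQL patterns, and the prompt wording are illustrative assumptions rather than the paper's actual prompts:

```python
# Hypothetical few-shot prompt construction for dialogue-to-SPARQL
# semantic parsing. The entity/property IDs (wd:/wdt:) follow Wikidata
# conventions purely for illustration.

FEW_SHOT_EXAMPLES = [
    ("Who directed Inception?",
     "SELECT ?d WHERE { wd:Q25188 wdt:P57 ?d . }"),
    ("And who composed its soundtrack?",
     "SELECT ?c WHERE { wd:Q25188 wdt:P86 ?c . }"),
]

def build_prompt(history, question, examples=FEW_SHOT_EXAMPLES):
    """Assemble a few-shot prompt asking an LLM to translate the
    latest dialogue utterance into a SPARQL query."""
    parts = ["Translate each question into a SPARQL query."]
    for utterance, query in examples:
        parts.append(f"Question: {utterance}\nSPARQL: {query}")
    if history:
        # Earlier turns give the model context to resolve
        # coreferences like "its" or "he" in the new question.
        parts.append("Dialogue so far:\n" + "\n".join(history))
    parts.append(f"Question: {question}\nSPARQL:")
    return "\n\n".join(parts)

prompt = build_prompt(
    history=["Who directed Inception?"],
    question="And who composed its soundtrack?",
)
print(prompt)
```

The prompt string would then be sent to the language model, whose completion is parsed as the candidate SPARQL query and executed against the knowledge graph.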

Citation (APA)

Schneider, P., Klettner, M., Jokinen, K., Simperl, E., & Matthes, F. (2024). Evaluating Large Language Models in Semantic Parsing for Conversational Question Answering over Knowledge Graphs. In International Conference on Agents and Artificial Intelligence (Vol. 3, pp. 807–814). Science and Technology Publications, Lda. https://doi.org/10.5220/0012394300003636
