Semantic Parsing for Knowledge Graph Question Answering with Large Language Models

Abstract

This thesis explores Knowledge Graph Question Answering, with a special emphasis on semantic parsing approaches built on pre-trained text-to-text language models. We use the text generation ability of these models to convert natural language questions into logical forms. We test whether correct logical forms are generated and, where they are not, how to mitigate the failure cases. As a second step, we try to make the same models generate additional information to aid grounding the logical forms to entities, relations, and literals in the Knowledge Graph. In experiments conducted so far, we see encouraging results both on the generation of base logical forms and on grounding them to KG elements. At the same time, we uncover failure cases that suggest directions for future work. (The author considers himself a 'middle-stage' Ph.D. candidate.)
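The abstract outlines a two-step pipeline: a text-to-text model first generates an ungrounded logical form, which is then grounded to identifiers in the Knowledge Graph. Below is a minimal sketch of that pipeline using Hugging Face transformers; the checkpoint name, the bracketed-label placeholder convention, and the toy label-to-URI table are hypothetical illustrations, not the thesis's actual models or data. In a real system, the dictionary lookup would be replaced by an entity and relation linking component.

```python
# A minimal sketch of the two-step pipeline described above, assuming a
# T5-style text-to-text model fine-tuned to emit SPARQL with bracketed
# label placeholders. The checkpoint name and the toy label->URI table
# are hypothetical, not the thesis's actual setup.
from transformers import T5ForConditionalGeneration, T5Tokenizer

MODEL_NAME = "my-org/t5-kgqa"  # hypothetical fine-tuned checkpoint

tokenizer = T5Tokenizer.from_pretrained(MODEL_NAME)
model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME)

def generate_logical_form(question: str) -> str:
    """Step 1: map a natural language question to an ungrounded logical form."""
    inputs = tokenizer(question, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=128, num_beams=5)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Toy lookup standing in for a real entity/relation linker over the KG.
LABEL_TO_URI = {
    "Albert Einstein": "wd:Q937",
    "place of birth": "wdt:P19",
}

def ground(logical_form: str) -> str:
    """Step 2: replace surface-label placeholders with KG identifiers."""
    for label, uri in LABEL_TO_URI.items():
        logical_form = logical_form.replace(f"[{label}]", uri)
    return logical_form

question = "Where was Albert Einstein born?"
lf = generate_logical_form(question)
# e.g. lf == "SELECT ?o WHERE { [Albert Einstein] [place of birth] ?o }"
print(ground(lf))
```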

Citation

Banerjee, D. (2023). Semantic Parsing for Knowledge Graph Question Answering with Large Language Models. In Lecture Notes in Computer Science (Vol. 13998 LNCS, pp. 234–243). Springer. https://doi.org/10.1007/978-3-031-43458-7_42
