COMPLEQA: Benchmarking the Impacts of Knowledge Graph Completion Methods on Question Answering


Abstract

How much success in Knowledge Graph Completion (KGC) translates into performance gains on downstream tasks is an important question that has not been studied in depth. In this paper, we introduce a novel benchmark, namely COMPLEQA, to comprehensively assess the influence of representative KGC methods on Knowledge Graph Question Answering (KGQA), one of the most important downstream applications. This benchmark includes a knowledge graph with 3 million triplets across 5 distinct domains, coupled with over 5,000 question-answering pairs and a completion dataset that is well-aligned with these questions. Our evaluation of four well-known KGC methods in combination with two state-of-the-art KGQA systems shows that effective KGC can significantly mitigate the impact of knowledge graph incompleteness on question-answering performance. Surprisingly, we also find that the best-performing KGC method does not necessarily lead to the best QA results, underscoring the need to consider downstream applications when developing KGC methods.
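To make the KGC-to-QA setup concrete, the sketch below illustrates how a typical embedding-based KGC method scores and ranks candidate triples. The abstract does not name the four methods evaluated, so TransE-style scoring is used here purely as a representative assumption; the toy embeddings are random, not trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 5 entities, 2 relations, embedding dimension 8.
# In a real KGC pipeline these vectors would be learned from the KG's triplets.
E = rng.normal(size=(5, 8))  # entity embeddings
R = rng.normal(size=(2, 8))  # relation embeddings

def transe_score(h: int, r: int, t: int) -> float:
    """TransE plausibility of triple (h, r, t): higher is more plausible."""
    return -float(np.linalg.norm(E[h] + R[r] - E[t]))

def rank_tail(h: int, r: int, true_t: int) -> int:
    """Rank the true tail entity among all entities (1 = best).

    Metrics such as MRR and Hits@k are computed from ranks like this one.
    """
    scores = np.array([transe_score(h, r, t) for t in range(len(E))])
    order = np.argsort(-scores)  # entities sorted by descending plausibility
    return int(np.where(order == true_t)[0][0]) + 1

# High-ranked predicted triples would be added back into the incomplete KG
# before the KGQA system retrieves answers; the paper's finding is that a
# better completion rank does not always yield better QA accuracy.
print(rank_tail(0, 1, 3))
```

This separation between the completion step (ranking triples) and the QA step (retrieving answers over the completed graph) is exactly where the paper's mismatch arises: standard KGC metrics score all predicted triples equally, while QA performance depends only on the triples the questions actually touch.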

Citation (APA)

Yu, D., Xiong, C., Gu, Y., & Yang, Y. (2023). COMPLEQA: Benchmarking the Impacts of Knowledge Graph Completion Methods on Question Answering. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 12748–12755). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-emnlp.849
