Do I have the Knowledge to Answer? Investigating Answerability of Knowledge Base Questions


Abstract

When answering natural language questions over knowledge bases, missing facts, incomplete schema, and limited scope naturally lead to many questions being unanswerable. While answerability has been explored in other QA settings, it has not been studied for QA over knowledge bases (KBQA). We create GrailQAbility, a new benchmark KBQA dataset with unanswerability, by first identifying various forms of KB incompleteness that make questions unanswerable, and then systematically adapting GrailQA (a popular KBQA dataset containing only answerable questions). Experimenting with three state-of-the-art KBQA models, we find that all three suffer a drop in performance even after suitable adaptation for unanswerable questions. Moreover, these models often detect unanswerability for the wrong reasons and find specific forms of unanswerability particularly difficult to handle. This underscores the need for further research to make KBQA systems robust to unanswerability.
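To make the notion of KB incompleteness concrete, the following is a minimal, hypothetical sketch of how dropping a fact from a knowledge base turns an answerable question into an unanswerable one. It is not the paper's actual construction pipeline; the toy KB, the example question, and all names here are assumptions made purely for illustration.

# Illustrative sketch only: toy KB, query, and names are hypothetical,
# not the GrailQAbility construction procedure.
from typing import Set, Tuple

Triple = Tuple[str, str, str]  # (subject, relation, object)

def answers(kb: Set[Triple], subj: str, rel: str) -> Set[str]:
    """Return all objects o such that (subj, rel, o) is in the KB."""
    return {o for (s, r, o) in kb if s == subj and r == rel}

# Toy KB with one fact that supports the example question.
kb: Set[Triple] = {
    ("Marie_Curie", "award_received", "Nobel_Prize_in_Physics"),
    ("Marie_Curie", "field_of_work", "Physics"),
}

# Question: "Which award did Marie Curie receive?"
query = ("Marie_Curie", "award_received")
print(answers(kb, *query))  # {'Nobel_Prize_in_Physics'} -> answerable

# Simulate fact-level KB incompleteness: drop the supporting fact.
kb.discard(("Marie_Curie", "award_received", "Nobel_Prize_in_Physics"))
print(answers(kb, *query))  # set() -> unanswerable (missing fact)

Schema-level incompleteness would instead remove the relation itself (here, award_received) from the KB schema, so that no question using it can be mapped to a valid logical form at all; this is one of the distinct forms of unanswerability a robust KBQA model would need to distinguish.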

Citation (APA)

Patidar, M., Faldu, P., Singh, A., Vig, L., Bhattacharya, I., & Mausam. (2023). Do I have the Knowledge to Answer? Investigating Answerability of Knowledge Base Questions. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 10341–10357). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-long.576
