Interactive instance-based evaluation of knowledge base question answering

Citations: 2
Mendeley readers: 86

Abstract

Most approaches to Knowledge Base Question Answering are based on semantic parsing. In this paper, we present a tool that aids in debugging question answering systems that construct a structured semantic representation of the input question. Previous work has largely focused on building question answering interfaces or evaluation frameworks that unify multiple data sets. The primary objective of our system is to enable interactive debugging of model predictions on individual instances (questions) and to simplify manual error analysis. Our interactive interface helps researchers understand the shortcomings of a particular model, qualitatively analyze the complete pipeline, and compare different models. A set of sit-by sessions was used to validate our interface design.

Citation (APA)

Sorokin, D., & Gurevych, I. (2018). Interactive instance-based evaluation of knowledge base question answering. In EMNLP 2018 - Conference on Empirical Methods in Natural Language Processing: System Demonstrations, Proceedings (pp. 114–119). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/d18-2020
