A quantitative evaluation of natural language question interpretation for question answering systems

Abstract

Systematic benchmark evaluation plays an important role in improving technologies for Question Answering (QA) systems. While a number of evaluation methods exist for natural language (NL) QA systems, most consider only the final answers, limiting them to black-box-style evaluation. Herein, we propose a subdivided evaluation approach that enables finer-grained evaluation of QA systems, and present an evaluation tool targeting the NL question (NLQ) interpretation step, the initial step of a QA pipeline. Experiments on two public benchmark datasets suggest that the proposed approach gives deeper insight into the performance of a QA system than black-box-style approaches, and should therefore provide better guidance for improving such systems.
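To make the contrast with black-box evaluation concrete, the sketch below scores a QA system's intermediate question interpretation directly against a gold interpretation, rather than only checking the final answer. Representing an interpretation as a set of (subject, predicate, object) patterns and the helper `interpretation_scores` are illustrative assumptions for this sketch, not the paper's actual tool or data format.

```python
# Minimal sketch: component-level evaluation of the NLQ interpretation step.
# The triple-set representation below is an assumption, not the paper's format.
from typing import Set, Tuple

Triple = Tuple[str, str, str]

def interpretation_scores(gold: Set[Triple], predicted: Set[Triple]) -> Tuple[float, float, float]:
    """Precision, recall, and F1 of a predicted interpretation against the gold one."""
    if not gold and not predicted:
        return 1.0, 1.0, 1.0
    overlap = len(gold & predicted)
    precision = overlap / len(predicted) if predicted else 0.0
    recall = overlap / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# A black-box evaluation would only mark the final answer right or wrong;
# the component-level score shows the interpretation is partially correct.
gold_interp = {("?drug", "treats", "Asthma"), ("?drug", "type", "Drug")}
pred_interp = {("?drug", "treats", "Asthma")}
print(interpretation_scores(gold_interp, pred_interp))  # (1.0, 0.5, 0.666...)
```

Scoring each pipeline step this way localizes errors (e.g., a wrong answer caused by a correct interpretation but a faulty query execution), which is the kind of guidance a final-answer-only evaluation cannot give.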

Citation (APA)

Asakura, T., Kim, J. D., Yamamoto, Y., Tateisi, Y., & Takagi, T. (2018). A quantitative evaluation of natural language question interpretation for question answering systems. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11341 LNCS, pp. 215–231). Springer Verlag. https://doi.org/10.1007/978-3-030-04284-4_15
