R4C: A benchmark for evaluating RC systems to get the right answer for the right reason

Citations: 39
Readers (Mendeley): 107

Abstract

Recent studies have revealed that reading comprehension (RC) systems learn to exploit annotation artifacts and other biases in current datasets. This prevents the community from reliably measuring the progress of RC systems. To address this issue, we introduce R4C, a new task for evaluating RC systems' internal reasoning. R4C requires giving not only answers but also derivations: explanations that justify predicted answers. We present a reliable, crowdsourced framework for scalably annotating RC datasets with derivations. We create and publicly release the R4C dataset, the first quality-assured dataset of this kind, consisting of 4.6k questions, each of which is annotated with three reference derivations (13.8k derivations in total). Experiments show that our automatic evaluation metrics using multiple reference derivations are reliable, and that R4C assesses skills different from those measured by an existing benchmark.
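To make the multi-reference evaluation idea concrete, the sketch below scores a predicted derivation against each of the three reference derivations and credits it with its best match. The representation of a derivation as a set of short fact statements, the set-level F1 scoring, the max-over-references aggregation, and the example facts are all illustrative assumptions; they are not the exact metric definitions used in the paper.

```python
# A minimal sketch of multi-reference derivation scoring (illustrative only):
# a derivation is assumed to be a set of atomic fact strings, and a prediction
# is scored against each reference derivation, keeping the best F1.

from typing import List, Set


def f1(predicted: Set[str], reference: Set[str]) -> float:
    """Set-level F1 between a predicted and a reference derivation."""
    if not predicted or not reference:
        return 0.0
    overlap = len(predicted & reference)
    precision = overlap / len(predicted)
    recall = overlap / len(reference)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


def multi_reference_score(predicted: Set[str], references: List[Set[str]]) -> float:
    """Score a prediction against multiple reference derivations, taking the max."""
    return max(f1(predicted, ref) for ref in references)


# Hypothetical example: one predicted derivation, three reference annotations.
prediction = {"Scott Derrickson is American", "Ed Wood is American"}
references = [
    {"Scott Derrickson is an American director", "Ed Wood is American"},
    {"Scott Derrickson is American", "Ed Wood is American"},
    {"Both directors are American"},
]
print(multi_reference_score(prediction, references))  # 1.0 (exact match with the 2nd reference)
```

Taking the maximum over references rewards a prediction that agrees with any one annotator's derivation, which is one common way to handle the fact that several distinct derivations can justify the same answer.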

Citation (APA)

Inoue, N., Stenetorp, P., & Inui, K. (2020). R4C: A benchmark for evaluating RC systems to get the right answer for the right reason. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (pp. 6740–6750). Association for Computational Linguistics. https://doi.org/10.5715/jnlp.27.665
