E-KAR: A Benchmark for Rationalizing Natural Language Analogical Reasoning

Abstract

The ability to recognize analogies is fundamental to human cognition. However, existing word-analogy benchmarks do not reveal the underlying process of analogical reasoning in neural models. Holding the belief that models capable of reasoning should be right for the right reasons, we propose a first-of-its-kind Explainable Knowledge-intensive Analogical Reasoning benchmark (E-KAR). The benchmark consists of 1,655 Chinese and 1,251 English problems sourced from the Civil Service Exams, which require intensive background knowledge to solve. More importantly, we design a free-text explanation scheme that states whether an analogy should be drawn, and we manually annotate such explanations for every question and candidate answer. Empirical results suggest that the benchmark is very challenging for state-of-the-art models on both the explanation generation and the analogical question answering tasks, which invites further research in this area. The project page of E-KAR can be found at https://ekar-leaderboard.github.io.
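To make the task format concrete, below is a minimal sketch, in Python, of how an E-KAR-style item and the question-answering metric could be represented. The field names (query, choices, label, explanations), the example content, and the accuracy helper are illustrative assumptions for this sketch and do not reflect the released dataset's actual schema.

from dataclasses import dataclass
from typing import List

@dataclass
class EkarItem:
    """One analogy problem; field names are assumptions, not the released schema."""
    query: str                 # source word pair, e.g. "tree : forest"
    choices: List[str]         # candidate word pairs to complete the analogy
    label: int                 # index of the correct candidate
    explanations: List[str]    # free-text rationale for why each candidate does or does not hold

def accuracy(items: List[EkarItem], predictions: List[int]) -> float:
    """Fraction of analogy questions answered correctly."""
    if not items:
        return 0.0
    correct = sum(1 for item, pred in zip(items, predictions) if pred == item.label)
    return correct / len(items)

# Hypothetical usage with a made-up item (not taken from the dataset):
example = EkarItem(
    query="tree : forest",
    choices=["soldier : army", "apple : pear", "book : page"],
    label=0,
    explanations=[
        "A forest is composed of many trees, just as an army is composed of many soldiers.",
        "Apple and pear are co-hyponyms, not a part-whole relation.",
        "A page is part of a book, which reverses the direction of the relation.",
    ],
)
print(accuracy([example], [0]))  # 1.0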

Citation (APA)

Chen, J., Xu, R., Fu, Z., Shi, W., Li, Z., Zhang, X., … Zhou, H. (2022). E-KAR: A benchmark for rationalizing natural language analogical reasoning. In Findings of the Association for Computational Linguistics: ACL 2022 (pp. 3941–3955). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.findings-acl.311
