Prediction or Comparison: Toward Interpretable Qualitative Reasoning

Abstract

Qualitative relationships describe how changing one property (e.g., velocity) affects another (e.g., kinetic energy) and constitute a considerable portion of textual knowledge. Current approaches either use semantic parsers to transform natural language inputs into logical expressions, or use a "black-box" model to solve them in one step. The former has a limited application range, while the latter lacks interpretability. In this work, we categorize qualitative reasoning tasks into two types: prediction and comparison. In particular, we adopt neural network modules trained end-to-end to simulate the two reasoning processes. Experiments on two qualitative reasoning question answering datasets, QuaRTz and QuaRel, show our methods' effectiveness and generalization capability, and the intermediate outputs provided by the modules make the reasoning process interpretable.
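To make the prediction/comparison distinction concrete, here is a toy sketch (not the paper's neural model): a qualitative relation is encoded as a polarity between two properties, a *prediction* question asks how the effect changes given the direction of change of the cause, and a *comparison* question ranks two entities on the effect given their values of the cause. All names and the `RELATION` table below are illustrative assumptions.

```python
# Toy illustration of qualitative reasoning, NOT the paper's module architecture.
# A relation maps (cause, effect) to a polarity: +1 means "same direction"
# (e.g., higher velocity -> higher kinetic energy), -1 means "opposite".
RELATION = {("velocity", "kinetic energy"): +1}

def predict(cause, effect, cause_dir):
    """Prediction-type question: given the direction of change of `cause`
    (+1 increase / -1 decrease), infer the direction of change of `effect`."""
    return cause_dir * RELATION[(cause, effect)]

def compare(cause, effect, value_a, value_b):
    """Comparison-type question: given two entities' values of `cause`,
    decide which entity has the larger `effect`."""
    cause_dir = +1 if value_a > value_b else -1  # A relative to B
    return "A" if predict(cause, effect, cause_dir) > 0 else "B"

print(predict("velocity", "kinetic energy", +1))      # +1: energy increases
print(compare("velocity", "kinetic energy", 30, 10))  # "A": faster body wins
```

The paper replaces these hand-written rules with neural modules learned end-to-end, but the intermediate quantities (the inferred polarity and direction of change) are what make the reasoning inspectable.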

Citation (APA)

Ren, M., Huang, H., & Gao, Y. (2021). Prediction or Comparison: Toward Interpretable Qualitative Reasoning. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 664–675). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-acl.59
