Explainable Inference Over Grounding-Abstract Chains for Science Questions

12 citations · 47 Mendeley readers

Abstract

We propose an explainable inference approach for science questions that reasons over grounding and abstract inference chains. This paper frames question answering as a natural language abductive reasoning problem, constructing plausible explanations for each candidate answer and then selecting the candidate with the best explanation as the final answer. Our method, ExplanationLP, elicits explanations by constructing a weighted graph of relevant facts for each candidate answer and employs a linear programming formalism designed to select the optimal subgraph of explanatory facts. Each graph's weighting function is composed of a set of parameters targeting relevance, cohesion and diversity, which we fine-tune for answer selection via Bayesian Optimisation. We carry out our experiments on the WorldTree and ARC-Challenge datasets to empirically demonstrate the following contributions: (1) ExplanationLP obtains strong performance compared to transformer-based and multi-hop approaches despite having significantly fewer parameters; (2) our model generates plausible explanations for its answer predictions; (3) our model demonstrates greater robustness to semantic drift than transformer-based and multi-hop approaches.

Citation (APA)

Thayaparan, M., Valentino, M., & Freitas, A. (2021). Explainable Inference Over Grounding-Abstract Chains for Science Questions. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 1–12). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-acl.1
