Generating Hypothetical Events for Abductive Inference


Abstract

Abductive reasoning starts from some observations and aims at finding the most plausible explanation for these observations. To perform abduction, humans often make use of temporal and causal inferences, and knowledge about how some hypothetical situation can result in different outcomes. This work offers the first study of how such knowledge impacts the Abductive NLI task, which consists in choosing the more likely explanation for given observations. We train a specialized language model LMI that is tasked to generate what could happen next from a hypothetical scenario that evolves from a given event. We then propose a multi-task model MTL to solve the Abductive NLI task, which predicts a plausible explanation by a) considering different possible events emerging from candidate hypotheses (events generated by LMI) and b) selecting the one that is most similar to the observed outcome. We show that our MTL model improves over prior vanilla pre-trained LMs fine-tuned on NLI. Our manual evaluation and analysis suggest that learning about possible next events from different hypothetical scenarios supports abductive inference.

Citation (APA)

Paul, D., & Frank, A. (2021). Generating Hypothetical Events for Abductive Inference. In *SEM 2021 - 10th Conference on Lexical and Computational Semantics, Proceedings of the Conference (pp. 67–77). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.starsem-1.6
