Thinking like a skeptic: Defeasible inference in natural language


Abstract

Defeasible inference is a mode of reasoning in which an inference (X is a bird, therefore X flies) may be weakened or overturned in light of new evidence (X is a penguin). Though long recognized in classical AI and philosophy, defeasible inference has not been extensively studied in the context of contemporary data-driven research on natural language inference and commonsense reasoning. We introduce Defeasible NLI (abbreviated δ-NLI), a dataset for defeasible inference in natural language. δ-NLI contains extensions to three existing inference datasets covering diverse modes of reasoning: common sense, natural language inference, and social norms. From δ-NLI, we develop both a classification and generation task for defeasible inference, and demonstrate that the generation task is much more challenging. Despite lagging human performance, however, generative models trained on this data are capable of writing sentences that weaken or strengthen a specified inference up to 68% of the time.
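To make the task concrete, below is a minimal sketch of how a single defeasible-inference example might be represented, built around the bird/penguin illustration from the abstract. The field names, the second example sentence, and the task framing in the comments are illustrative assumptions, not the dataset's actual schema or the authors' pipeline.

```python
# A minimal sketch of a defeasible-inference example, assuming a simple
# (premise, hypothesis, update, label) structure. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class DefeasibleExample:
    premise: str     # the situation being reasoned about
    hypothesis: str  # the default inference drawn from the premise
    update: str      # new evidence that shifts belief in the hypothesis
    label: str       # "weakener" or "strengthener"

examples = [
    DefeasibleExample(
        premise="X is a bird.",
        hypothesis="X flies.",
        update="X is a penguin.",
        label="weakener",
    ),
    DefeasibleExample(
        premise="X is a bird.",
        hypothesis="X flies.",
        update="X is often seen soaring above the lake.",  # hypothetical strengthener
        label="strengthener",
    ),
]

# Classification task: given (premise, hypothesis, update), predict the label.
# Generation task: given (premise, hypothesis) and a target label, write the update.
for ex in examples:
    print(f"{ex.update!r} -> {ex.label}")
```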

Citation (APA)

Rudinger, R., Shwartz, V., Hwang, J. D., Bhagavatula, C., Forbes, M., Le Bras, R., … Choi, Y. (2020). Thinking like a skeptic: Defeasible inference in natural language. In Findings of the Association for Computational Linguistics: EMNLP 2020 (pp. 4661–4675). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.findings-emnlp.418
