Natural Language Deduction with Incomplete Information


Abstract

A growing body of work studies how to answer a question or verify a claim by generating a natural language “proof”: a chain of deductive inferences yielding the answer based on a set of premises. However, these methods can only make sound deductions that follow from the evidence they are given. We propose a new system that can handle the underspecified setting where not all premises are stated at the outset; that is, additional assumptions need to be materialized to prove a claim. By using a natural language generation model to abductively infer a premise given another premise and a conclusion, we can impute missing pieces of evidence needed for the conclusion to be true. Our system searches over two fringes in a bidirectional fashion, interleaving deductive (forward-chaining) and abductive (backward-chaining) generation steps. We sample multiple possible outputs for each step to achieve coverage of the search space, while ensuring correctness by filtering low-quality generations with a round-trip validation procedure. Results on a modified version of the EntailmentBank dataset and a new dataset called Everyday Norms: Why Not? show that abductive generation with validation can recover premises across in- and out-of-domain settings.
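For intuition, the following is a minimal Python sketch of the kind of bidirectional search the abstract describes: a forward fringe grown by deduction from the given premises and a backward fringe grown by abduction from the claim, with sampled generations filtered by round-trip validation. The function names, signatures, and the final ranking heuristic are assumptions made for illustration only; the generation and validation models are supplied by the caller, and none of this is the authors' actual implementation.

from itertools import combinations

def recover_missing_premise(premises, claim, deduce, abduce, entails,
                            max_steps=3):
    """Illustrative sketch (not the paper's code).

    deduce(p1, p2)        -> list of sampled conclusions (forward chaining)
    abduce(premise, goal) -> list of sampled missing premises (backward chaining)
    entails(p1, p2, c)    -> bool, the round-trip validation check

    Returns a candidate missing premise, or None if the claim already
    follows from the given premises within the step budget.
    """
    forward = set(premises)   # fringe grown by deduction from given evidence
    backward = {claim}        # fringe grown by abduction from the claim
    candidates = []           # validated abduced premises (possible assumptions)

    for _ in range(max_steps):
        # Deductive (forward-chaining) step: combine known statements and
        # keep only conclusions that survive round-trip validation.
        for p1, p2 in combinations(list(forward), 2):
            for c in deduce(p1, p2):
                if entails(p1, p2, c):
                    forward.add(c)
        if claim in forward:          # exact-match check, enough for a sketch
            return None               # claim already follows from the evidence

        # Abductive (backward-chaining) step: for each open goal, impute a
        # premise that, together with a known statement, would entail it.
        for goal in list(backward):
            for p in list(forward):
                for missing in abduce(p, goal):
                    if not entails(p, missing, goal):
                        continue                  # failed round-trip validation
                    if missing in forward:
                        return None               # gap closed by known evidence
                    backward.add(missing)         # keep searching toward it
                    candidates.append(missing)

    # Report one validated abduced premise as the materialized assumption
    # (the real system scores candidates; here we simply take the shortest).
    return min(candidates, key=len) if candidates else None

Interleaving the two steps mirrors the two fringes described in the abstract: deduction extends what is known, while abduction works backward from the claim until the gap between them can be closed by a single imputed premise.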

Cite

APA

Sprague, Z., Bostrom, K., Chaudhuri, S., & Durrett, G. (2022). Natural Language Deduction with Incomplete Information. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 (pp. 8230–8258). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.emnlp-main.564
