Bridging knowledge gaps in neural entailment via symbolic models


Abstract

Most textual entailment models focus on lexical gaps between the premise text and the hypothesis, but rarely on knowledge gaps. We focus on filling these knowledge gaps in the Science Entailment task by leveraging an external structured knowledge base (KB) of science facts. Our new architecture combines standard neural entailment models with a knowledge lookup module. To facilitate this lookup, we propose a fact-level decomposition of the hypothesis and verify the resulting sub-facts against both the textual premise and the structured KB. Our model, NSnet, learns to aggregate predictions from these heterogeneous data formats. On the SciTail dataset, NSnet outperforms a simpler combination of the two predictions by 3% and the base entailment model by 5%.
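To make the aggregation idea concrete, here is a minimal sketch in PyTorch of how per-sub-fact scores from the text and from a structured KB might be combined with a base neural entailment score. The layer sizes, mean-pooling over sub-facts, and the three-feature concatenation are illustrative assumptions for this sketch, not the authors' exact NSnet architecture.

```python
import torch
import torch.nn as nn

class HeterogeneousAggregator(nn.Module):
    """Sketch of an aggregator over heterogeneous predictions:
    a neural entailment score for the full premise/hypothesis pair,
    plus scores for each hypothesis sub-fact verified against the
    premise text and against the structured KB. Hypothetical layer
    sizes; not the published NSnet configuration."""

    def __init__(self, hidden_dim: int = 16):
        super().__init__()
        # Input features: [base entailment score,
        #                  mean text-verified sub-fact score,
        #                  mean KB-verified sub-fact score]
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, base_score, text_subfact_scores, kb_subfact_scores):
        # Mean-pool the per-sub-fact scores (an assumption of this
        # sketch) and feed all three signals through a small MLP.
        features = torch.stack([
            base_score,
            text_subfact_scores.mean(),
            kb_subfact_scores.mean(),
        ])
        return torch.sigmoid(self.mlp(features))


if __name__ == "__main__":
    model = HeterogeneousAggregator()
    # Hypothetical scores for a hypothesis decomposed into 3 sub-facts.
    base = torch.tensor(0.62)             # neural entailment model
    text = torch.tensor([0.9, 0.4, 0.7])  # sub-facts vs. premise text
    kb = torch.tensor([0.8, 0.95, 0.1])   # sub-facts vs. structured KB
    print(f"entailment probability: {model(base, text, kb).item():.3f}")
```

The point of the learned aggregation, as opposed to a fixed rule such as averaging the two predictions, is that the model can weight the symbolic KB signal and the textual signal differently depending on how informative each is; the paper's 3% gain over a simpler combination reflects this.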

Citation (APA)

Kang, D., Khot, T., Sabharwal, A., & Clark, P. (2018). Bridging knowledge gaps in neural entailment via symbolic models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018 (pp. 4940–4945). Association for Computational Linguistics. https://doi.org/10.18653/v1/d18-1535
