Neuro-symbolic Natural Logic with Introspective Revision for Natural Language Inference

Citations: 9
Readers (Mendeley): 48

Abstract

We introduce a neuro-symbolic natural logic framework based on reinforcement learning with introspective revision. The model samples and rewards specific reasoning paths through policy gradient; the introspective revision algorithm modifies intermediate symbolic reasoning steps to discover reward-earning operations and leverages external knowledge to alleviate spurious reasoning and training inefficiency. The framework is supported by properly designed local relation models that avoid input entangling, which helps ensure the interpretability of the proof paths. The proposed model has built-in interpretability and, compared with previous models on existing datasets, shows superior capability in monotonicity inference, systematic generalization, and interpretability.
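For readers unfamiliar with the natural logic formalism the framework builds on, the sketch below illustrates how local relations along a reasoning path are composed into a final inference label. The seven basic relations and the join-table entries follow MacCartney and Manning's natural logic; the code itself is an illustrative sketch, not the paper's implementation (omitted join entries default to independence, which is a sound but lossy simplification).

```python
from enum import Enum

class Rel(Enum):
    # The seven basic natural logic relations
    EQ = "equivalence"         # x ≡ y
    FE = "forward entailment"  # x ⊑ y
    RE = "reverse entailment"  # x ⊒ y
    NEG = "negation"           # x ^ y (exclusive and exhaustive)
    ALT = "alternation"        # x | y (exclusive, non-exhaustive)
    COV = "cover"              # x ‿ y (exhaustive, non-exclusive)
    IND = "independence"       # x # y

# Partial join table: composing two relations along a proof path.
# Pairs not listed fall back to independence below.
JOIN = {
    (Rel.FE, Rel.FE): Rel.FE,
    (Rel.RE, Rel.RE): Rel.RE,
    (Rel.NEG, Rel.NEG): Rel.EQ,
    (Rel.FE, Rel.NEG): Rel.ALT,
    (Rel.NEG, Rel.RE): Rel.ALT,
    (Rel.NEG, Rel.FE): Rel.COV,
    (Rel.RE, Rel.NEG): Rel.COV,
    (Rel.FE, Rel.ALT): Rel.ALT,
    (Rel.ALT, Rel.RE): Rel.ALT,
}

def join(r1: Rel, r2: Rel) -> Rel:
    """Compose two relations; equivalence is the identity element."""
    if r1 == Rel.EQ:
        return r2
    if r2 == Rel.EQ:
        return r1
    return JOIN.get((r1, r2), Rel.IND)

def aggregate(path: list[Rel]) -> Rel:
    """Fold the local relations along a reasoning path, left to right."""
    rel = Rel.EQ
    for r in path:
        rel = join(rel, r)
    return rel

# Map the aggregated relation to a three-way NLI label.
LABEL = {Rel.EQ: "entailment", Rel.FE: "entailment",
         Rel.NEG: "contradiction", Rel.ALT: "contradiction"}

def nli_label(path: list[Rel]) -> str:
    return LABEL.get(aggregate(path), "neutral")
```

For example, a path of two forward entailments still entails (`nli_label([Rel.FE, Rel.FE])` is `"entailment"`), while a forward entailment followed by a negation composes to alternation and yields a contradiction. In the paper's setting, it is reward-earning paths of this kind that policy gradient reinforces, and that introspective revision repairs when an intermediate step goes wrong.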

Citation (APA)

Feng, Y., Yang, X., Zhu, X., & Greenspan, M. (2022). Neuro-symbolic Natural Logic with Introspective Revision for Natural Language Inference. Transactions of the Association for Computational Linguistics, 10, 240–256. https://doi.org/10.1162/tacl_a_00458
