Stretching Sentence-pair NLI Models to Reason over Long Documents and Clusters

23 citations · 33 Mendeley readers
Abstract

Natural Language Inference (NLI) has been extensively studied by the NLP community as a framework for estimating the semantic relation between sentence pairs. While early work identified certain biases in NLI models, recent advancements in modeling and datasets demonstrated promising performance. In this work, we further explore the direct zero-shot applicability of NLI models to real applications, beyond the sentence-pair setting they were trained on. First, we analyze the robustness of these models to longer and out-of-domain inputs. Then, we develop new aggregation methods to allow operating over full documents, reaching state-of-the-art performance on the ContractNLI dataset. Interestingly, we find NLI scores to provide strong retrieval signals, leading to more relevant evidence extractions compared to common similarity-based methods. Finally, we go further and investigate whole document clusters to identify both discrepancies and consensus among sources. In a test case, we find real inconsistencies between Wikipedia pages in different languages about the same topic.
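The document-level setup the abstract describes, scoring a hypothesis against each sentence of a document with an off-the-shelf sentence-pair NLI model and then aggregating the per-sentence scores, can be sketched roughly as follows. This is a minimal illustration only: the model name (roberta-large-mnli) and the max-entailment aggregation rule are assumptions for the sketch, not necessarily the paper's exact configuration.

```python
# Hedged sketch: zero-shot document-level NLI by aggregating sentence-pair scores.
# The model choice and max-pooling aggregation are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "roberta-large-mnli"  # assumed off-the-shelf sentence-pair NLI model
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.eval()

def sentence_entailment_scores(premises, hypothesis):
    """Entailment probability of `hypothesis` against each premise sentence."""
    batch = tokenizer(premises, [hypothesis] * len(premises),
                      padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        probs = torch.softmax(model(**batch).logits, dim=-1)
    # roberta-large-mnli label order: 0=contradiction, 1=neutral, 2=entailment
    return probs[:, 2]

def document_entailment(document_sentences, hypothesis):
    """Aggregate per-sentence scores over the full document (max-pooling here)."""
    scores = sentence_entailment_scores(document_sentences, hypothesis)
    best = int(torch.argmax(scores))
    return scores[best].item(), document_sentences[best]
```

Under this kind of aggregation, the highest-scoring sentence doubles as an extracted evidence span, which is the retrieval-style use of NLI scores the abstract mentions.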

Citation (APA)

Schuster, T., Chen, S., Buthpitiya, S., Fabrikant, A., & Metzler, D. (2022). Stretching Sentence-pair NLI Models to Reason over Long Documents and Clusters. In Findings of the Association for Computational Linguistics: EMNLP 2022 (pp. 394–412). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-emnlp.28
