TINA: Textual Inference with Negation Augmentation

Abstract

Transformer-based language models achieve state-of-the-art results on several natural language processing tasks. One of these is textual entailment, i.e., the task of determining whether a premise logically entails a hypothesis. However, these models perform poorly on the task when the examples contain negations. In this paper, we propose a new definition of textual entailment that also captures negation. This allows us to develop TINA (Textual Inference with Negation Augmentation), a principled technique for negated data augmentation that can be combined with the unlikelihood loss function. Our experiments with different transformer-based models show that our method can significantly improve performance on textual entailment datasets with negation, without sacrificing performance on datasets without negation.
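As a rough illustration of the unlikelihood loss mentioned in the abstract: the objective penalizes probability mass the model assigns to outputs that should not hold. The following is a minimal, hypothetical PyTorch sketch for a classification setting such as entailment; the function name, tensor shapes, and epsilon constant are assumptions for illustration, not the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def unlikelihood_loss(logits: torch.Tensor, negative_labels: torch.Tensor) -> torch.Tensor:
        # logits: (batch, num_classes) classifier outputs.
        # negative_labels: (batch,) indices of labels the model should NOT predict,
        # e.g. the label that no longer holds after a premise or hypothesis is negated.
        probs = F.softmax(logits, dim=-1)
        p_neg = probs.gather(1, negative_labels.unsqueeze(1)).squeeze(1)
        # -log(1 - p) grows as the model puts mass on the forbidden label.
        return -torch.log((1.0 - p_neg).clamp_min(1e-8)).mean()

In a negation-augmentation setup of the kind the abstract describes, such a term would plausibly be added to the standard cross-entropy loss on the augmented examples; how the two objectives are weighted is a training detail not specified here.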

Citation (APA)
Helwe, C., Coumes, S., Clavel, C., & Suchanek, F. (2022). TINA: Textual Inference with Negation Augmentation. In Findings of the Association for Computational Linguistics: EMNLP 2022 (pp. 4115–4128). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-emnlp.301
