Improving negation detection with negation-focused pre-training


Abstract

Negation is a common linguistic feature that is crucial in many language understanding tasks, yet it remains a hard problem due to the diversity of its expression across different types of text. Recent work has shown that state-of-the-art NLP models underperform on samples containing negation in various tasks, and that negation detection models do not transfer well across domains. We propose a new negation-focused pre-training strategy, involving targeted data augmentation and negation masking, to better incorporate negation information into language models. Extensive experiments on common benchmarks show that our proposed approach improves negation detection performance and generalizability over the strong baseline NegBERT (Khandelwal and Sawant, 2020).
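The abstract does not spell out the masking procedure, but a reasonable reading of "negation masking" is an MLM-style corruption that preferentially hides negation cue tokens so the model must learn to recover them. The sketch below illustrates that idea only; the cue list, the probabilities, and the function name negation_masking are illustrative assumptions, not the authors' implementation.

    import random

    # Hypothetical cue inventory; the paper's actual list and
    # masking rates are not given in this abstract.
    NEGATION_CUES = {"not", "no", "never", "nothing", "n't",
                     "without", "neither", "nor"}

    def negation_masking(tokens, mask_token="[MASK]",
                         cue_prob=0.8, rand_prob=0.15):
        """Preferentially mask negation cues, with ordinary
        BERT-style random masking for the remaining tokens."""
        masked, labels = [], []
        for tok in tokens:
            if tok.lower() in NEGATION_CUES and random.random() < cue_prob:
                masked.append(mask_token)  # hide the negation cue
                labels.append(tok)         # model must predict it back
            elif random.random() < rand_prob:
                masked.append(mask_token)  # standard MLM masking
                labels.append(tok)
            else:
                masked.append(tok)
                labels.append(None)        # not a prediction target
        return masked, labels

    tokens = "I do n't think this transfers across domains".split()
    print(negation_masking(tokens))

Biasing the corruption toward negation cues (rather than masking uniformly at random) is what would make the pre-training signal negation-focused in this sketch.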

Citation (APA)

Truong, H. T., Baldwin, T., Cohn, T., & Verspoor, K. (2022). Improving negation detection with negation-focused pre-training. In NAACL 2022 - 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference (pp. 4188–4193). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.naacl-main.309
