NLMs: Augmenting Negation in Language Models

Abstract

Negation is a fundamental component of natural language that reverses the semantic meaning of a sentence. It plays an extremely important role across a wide range of applications, yet it is under-represented in pretrained language models (LMs), which often leads to wrong inferences. In this work, we improve the understanding of negation in pretrained LMs. To augment negation understanding, we propose a language model objective with a weighted cross-entropy loss and elastic weight consolidation regularization. With negation-augmented models, we reduce the mean top-1 error rate on the negated LAMA dataset to 1.1% for BERT-base, 0.78% for BERT-large, 3.74% for RoBERTa-base, and 0.01% for RoBERTa-large, outperforming existing negation models. This reduces the mean error rate by margins of 8% and 6% relative to the original BERT and RoBERTa models, respectively. We also provide empirical evidence that negation-augmented models outperform the classical models on both original and negation benchmarks for natural language inference tasks.
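The abstract does not include an implementation, but the objective it describes combines two standard pieces: a cross-entropy loss that up-weights negation tokens, and an elastic weight consolidation (EWC) penalty that keeps fine-tuned parameters close to the pretrained ones in proportion to their estimated Fisher importance. The sketch below illustrates that combination in PyTorch; the function name, the negation_mask labeling, and the hyperparameters (negation_weight, ewc_lambda) are all hypothetical stand-ins, and the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def negation_augmented_loss(logits, labels, negation_mask,
                            model, ref_params, fisher_diag,
                            negation_weight=2.0, ewc_lambda=0.1):
    """Sketch of a weighted cross-entropy + EWC objective (assumed form).

    logits:        (batch, seq_len, vocab) LM predictions
    labels:        (batch, seq_len) target token ids (-100 = ignore)
    negation_mask: (batch, seq_len) 1.0 where a token is part of a
                   negation cue/scope, else 0.0 (hypothetical labeling)
    ref_params:    dict of pretrained parameter tensors, keyed by name
    fisher_diag:   dict of diagonal Fisher estimates, keyed by name
    """
    # Per-token cross-entropy, up-weighted on negation tokens.
    per_token = F.cross_entropy(
        logits.view(-1, logits.size(-1)),
        labels.view(-1),
        ignore_index=-100,
        reduction="none",
    ).view(labels.shape)
    weights = 1.0 + (negation_weight - 1.0) * negation_mask
    valid = (labels != -100).float()
    ce = (weights * per_token * valid).sum() / valid.sum().clamp(min=1.0)

    # EWC penalty: pull each parameter toward its pretrained value,
    # scaled by its estimated importance (diagonal Fisher information).
    ewc = torch.zeros((), device=logits.device)
    for name, p in model.named_parameters():
        ewc = ewc + (fisher_diag[name] * (p - ref_params[name]) ** 2).sum()

    return ce + ewc_lambda * ewc
```

In this reading, the weighted term pushes the model to get negated contexts right, while the EWC term guards against catastrophic forgetting of the original pretraining knowledge, which is consistent with the abstract's claim of gains on both original and negation benchmarks.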

Citation (APA)

Singh, R., Kumar, R., & Sridhar, V. (2023). NLMs: Augmenting Negation in Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 13104–13116). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-emnlp.873
