This paper analyzes negation in eight popular corpora spanning six natural language understanding tasks. We show that these corpora have few negations compared to general-purpose English, and that the few negations in them are often unimportant. Indeed, one can often ignore negations and still make the right predictions. Additionally, experimental results show that state-of-the-art transformers trained with these corpora obtain substantially worse results with instances that contain negation, especially if the negations are important. We conclude that new corpora accounting for negation are needed to solve natural language understanding tasks when negation is present.
CITATION STYLE
Hossain, M. M., Chinnappa, D., & Blanco, E. (2022). An Analysis of Negation in Natural Language Understanding Corpora. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 2, pp. 716–723). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-short.81