Developmental Negation Processing in Transformer Language Models


Abstract

Reasoning using negation is known to be difficult for transformer-based language models. While previous studies have used the tools of psycholinguistics to probe a transformer's ability to reason over negation, none have focused on the types of negation studied in developmental psychology. We explore how well transformers can process such categories of negation by framing the problem as a natural language inference (NLI) task. We curate a set of diagnostic questions for our target categories from popular NLI datasets and evaluate how well a suite of models reason over them. We find that models perform consistently better only on certain categories, suggesting clear distinctions in how they are processed.
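The evaluation setup described above — diagnostic premise/hypothesis pairs tagged with a negation category, scored per category — can be sketched as follows. This is a minimal illustration, not the authors' code: the category names (e.g. "nonexistence", "rejection", "denial") are common labels from the developmental psychology literature and are assumptions here, and the example sentences are invented for demonstration.

```python
# Sketch: framing developmental-negation diagnostics as NLI items and
# computing per-category accuracy. Categories and sentences are illustrative
# assumptions, not taken from the paper.
from dataclasses import dataclass

LABELS = ("entailment", "neutral", "contradiction")

@dataclass(frozen=True)
class NLIExample:
    premise: str
    hypothesis: str
    label: str      # gold NLI label
    category: str   # developmental negation category (assumed taxonomy)

    def __post_init__(self):
        if self.label not in LABELS:
            raise ValueError(f"unknown NLI label: {self.label}")

# Invented diagnostic items, one per (assumed) category.
diagnostics = [
    NLIExample("The box is empty.", "There is no toy in the box.",
               "entailment", "nonexistence"),
    NLIExample("The child said no to the broccoli.",
               "The child ate the broccoli.", "contradiction", "rejection"),
    NLIExample("That is not a dog.", "That is a dog.",
               "contradiction", "denial"),
]

def accuracy_by_category(examples, predictions):
    """Score model predictions per negation category."""
    totals, correct = {}, {}
    for ex, pred in zip(examples, predictions):
        totals[ex.category] = totals.get(ex.category, 0) + 1
        if pred == ex.label:
            correct[ex.category] = correct.get(ex.category, 0) + 1
    return {cat: correct.get(cat, 0) / n for cat, n in totals.items()}
```

In practice the `predictions` would come from an NLI-fine-tuned transformer; comparing the resulting per-category accuracies is what reveals whether some negation types are processed more reliably than others.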

Citation (APA)
Laverghetta, A., & Licato, J. (2022). Developmental Negation Processing in Transformer Language Models. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 2, pp. 545–551). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-short.60
