It is a Bird Therefore it is a Robin: On BERT's Internal Consistency Between Hypernym Knowledge and Logical Words


Abstract

The lexical knowledge of NLP systems should be tested (i) for internal consistency (avoiding groundedness issues) and (ii) for both content words and logical words. In this paper we propose a new method to test the understanding of the hypernymy relationship by measuring its antisymmetry according to the models. Previous studies often rely only on the direct question (e.g., A robin is a...), where we argue a correct answer could rely on collocational cues alone, rather than hierarchical cues. We show how to control for this confound and why it matters. We develop a method to ask similar questions about logical words that encode an entailment-like relation (e.g., because or therefore). Our results show important weaknesses of BERT-like models on these semantic tasks.
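The antisymmetry probe described in the abstract can be sketched as follows. This is a hypothetical illustration, not the paper's actual implementation: `score` is a stand-in for a masked-LM plausibility score (in the paper this would come from BERT), and the toy values are illustrative only.

```python
# Toy stand-in scores for P_model("A {hyponym} is a {hypernym}.").
# A real probe would obtain these from a masked language model such as BERT.
TOY_SCORES = {
    ("robin", "bird"): 0.92,  # "A robin is a bird." (true direction)
    ("bird", "robin"): 0.08,  # "A bird is a robin." (reversed direction)
}

def score(hyponym: str, hypernym: str) -> float:
    """Placeholder for a model's plausibility score of 'A {hyponym} is a {hypernym}.'"""
    return TOY_SCORES.get((hyponym, hypernym), 0.0)

def respects_antisymmetry(hypo: str, hyper: str, margin: float = 0.1) -> bool:
    """Hypernymy is antisymmetric: if 'a robin is a bird' holds, the reverse
    should not. A model is internally consistent on this pair when it scores
    the hyponym->hypernym direction clearly above the reverse."""
    return score(hypo, hyper) - score(hyper, hypo) > margin

print(respects_antisymmetry("robin", "bird"))  # True with the toy scores
print(respects_antisymmetry("bird", "robin"))  # False with the toy scores
```

Testing both directions, rather than only the direct question, is what separates genuine hierarchical knowledge from mere collocational association.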

Citation (APA)

Guerin, N., & Chemla, E. (2023). It is a Bird Therefore it is a Robin: On BERT’s Internal Consistency Between Hypernym Knowledge and Logical Words. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 8807–8817). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-acl.560
