Does he wink or does he nod? A challenging benchmark for evaluating word understanding of language models


Abstract

Recent progress in pretraining language models on large corpora has resulted in large performance gains on many NLP tasks. These large models acquire linguistic knowledge during pretraining, which helps to improve performance on downstream tasks via fine-tuning. To assess what kind of knowledge is acquired, language models are commonly probed by querying them with 'fill in the blank' style cloze questions. Existing probing datasets mainly focus on knowledge about relations between words and entities. We introduce WDLMPro (Word Definition Language Model Probing) to evaluate word understanding directly using dictionary definitions of words. In our experiments, three popular pretrained language models struggle to match words and their definitions. This indicates that they understand many words poorly and that our new probing task is a difficult challenge that could help guide research on LMs in the future.

Cite

APA

Senel, L. K., & Schütze, H. (2021). Does he wink or does he nod? A challenging benchmark for evaluating word understanding of language models. In EACL 2021 - 16th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference (pp. 532–538). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.eacl-main.42
