A howling success or a working sea? Testing what BERT knows about metaphors

11 citations · 49 Mendeley readers

Abstract

Metaphor is a widespread linguistic and cognitive phenomenon governed by mechanisms that have received considerable attention in the literature. Transformer language models such as BERT have brought improvements to metaphor-related tasks. However, they have been used only in application contexts, and their knowledge of the phenomenon itself has not been analyzed. To test what BERT knows about metaphors, we challenge it on a new dataset designed to probe various aspects of the phenomenon: variations in linguistic structure, variations in conventionality, the boundaries of a metaphor's plausibility, and the interpretations we attribute to metaphoric expressions. The results reveal tendencies suggesting that the model can reproduce some human intuitions about metaphors.

Citation (APA)

Pedinotti, P., Di Palma, E., Cerini, L., & Lenci, A. (2021). A howling success or a working sea? Testing what BERT knows about metaphors. In BlackboxNLP 2021 - Proceedings of the 4th BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (pp. 192–204). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.blackboxnlp-1.13
