An analysis of language models for metaphor recognition

Abstract

We conduct a linguistic analysis of recent metaphor recognition systems, all of which are based on language models. We show that their performance, although reaching high F-scores, has considerable gaps from a linguistic perspective. First, they perform substantially worse on unconventional metaphors than on conventional ones. Second, they struggle with handling rarer word types. These two findings together suggest that a large part of the systems’ success is due to optimising the disambiguation of conventionalised, metaphoric word senses for specific words instead of modelling general properties of metaphors. As a positive result, the systems show increasing capabilities to recognise metaphoric readings of unseen words if synonyms or morphological variations of these words have been seen before, leading to enhanced generalisation beyond word sense disambiguation.

Citation (APA)

Neidlein, A., Wiesenbach, P., & Markert, K. (2020). An analysis of language models for metaphor recognition. In COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference (pp. 3722–3736). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.coling-main.332
