Figurative Language Processing: A Linguistically Informed Feature Analysis of the Behavior of Language Models and Humans

2 citations · 17 Mendeley readers

Abstract

Recent years have witnessed a growing interest in investigating what Transformer-based language models (TLMs) actually learn from the training data. This is especially relevant for complex tasks such as the understanding of non-literal meaning. In this work, we probe the performance of three black-box TLMs and two intrinsically transparent white-box models on figurative language classification of sarcasm, similes, idioms, and metaphors. We conduct two studies on the classification results to provide insights into the inner workings of such models. With our first analysis on feature importance, we identify crucial differences in model behavior. With our second analysis using an online experiment with human participants, we inspect different linguistic characteristics of the four figurative language types.
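The first analysis ranks input features by how much each contributes to a model's classification decisions. As a loose, self-contained illustration of one common technique for this, permutation feature importance, the sketch below uses a synthetic toy dataset and a hand-coded stand-in classifier; none of the features, data, or models here are the ones used in the paper.

```python
import random

# Synthetic toy data: each example is a feature vector
# [sentence_length, has_negation] with a binary "sarcastic" label.
# Purely illustrative; not the paper's actual features or labels.
DATA = [
    ([12, 1], 1), ([5, 0], 0), ([14, 1], 1), ([6, 0], 0),
    ([11, 1], 1), ([4, 0], 0), ([13, 1], 1), ([7, 0], 0),
]

def classify(features):
    """Stand-in classifier: predicts 'sarcastic' when the negation flag is set."""
    return 1 if features[1] == 1 else 0

def accuracy(data):
    return sum(classify(x) == y for x, y in data) / len(data)

def permutation_importance(data, feature_idx, seed=0):
    """Importance of one feature = accuracy drop after shuffling that column."""
    rng = random.Random(seed)
    column = [x[feature_idx] for x, _ in data]
    rng.shuffle(column)
    permuted = [(x[:feature_idx] + [v] + x[feature_idx + 1:], y)
                for (x, y), v in zip(data, column)]
    return accuracy(data) - accuracy(permuted)

# Shuffling a column the classifier ignores leaves accuracy unchanged
# (importance 0.0); shuffling a column it relies on typically lowers it.
print(permutation_importance(DATA, 0))
print(permutation_importance(DATA, 1))
```

A feature whose permutation leaves accuracy untouched is one the classifier does not rely on; comparing such scores across models is one way to surface the behavioral differences the abstract refers to.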

Citation (APA)

Jang, H., Yu, Q., & Frassinelli, D. (2023). Figurative Language Processing: A Linguistically Informed Feature Analysis of the Behavior of Language Models and Humans. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 9816–9832). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-acl.622
