Laughing Heads: Can Transformers Detect What Makes a Sentence Funny?

Citations: 1
Readers: 15 (Mendeley users who have this article in their library)

Abstract

The automatic detection of humor poses a grand challenge for natural language processing. Transformer-based systems have recently achieved remarkable results on this task, but they have usually (1) been evaluated in setups where serious and humorous texts come from entirely different sources, and (2) focused on benchmarking performance without providing insights into how the models work. We make progress in both respects by training and analyzing transformer-based humor recognition models on a recently introduced dataset consisting of minimal pairs of aligned sentences, one serious, the other humorous. We find that, although our aligned dataset is much harder than previous datasets, transformer-based models recognize the humorous sentence in an aligned pair with high accuracy (78%). In a careful error analysis, we characterize easy vs. hard instances. Finally, by analyzing attention weights, we obtain important insights into the mechanisms by which transformers recognize humor. Most remarkably, we find clear evidence that a single attention head learns to recognize the words that make a test sentence humorous, even without access to this information at training time.
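As a rough sketch of the kind of attention analysis the abstract describes, the snippet below loads a pretrained BERT model via Hugging Face Transformers and ranks tokens by the attention they receive from the [CLS] position in a single head. The model name (bert-base-uncased) and the layer/head indices are illustrative assumptions, not the fine-tuned model or the specific head identified in the paper.

```python
# Sketch: inspecting per-token attention in one head of a pretrained BERT.
# Model, layer, and head choices are hypothetical, for illustration only.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

sentence = "I childproofed my house, but the kids still get in."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions: one tensor per layer, each of shape
# (batch, num_heads, seq_len, seq_len)
layer, head = 10, 7  # hypothetical layer/head to inspect
attn = outputs.attentions[layer][0, head]  # (seq_len, seq_len)

# Attention each token receives from the [CLS] position (index 0),
# a common proxy for token importance in classification models.
cls_to_tokens = attn[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for tok, score in sorted(zip(tokens, cls_to_tokens.tolist()),
                         key=lambda x: -x[1])[:5]:
    print(f"{tok:>12s}  {score:.3f}")
```

Repeating this over many aligned pairs and heads would let one test whether any single head consistently concentrates attention on the humor-bearing words, in the spirit of the analysis reported in the paper.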

Citation (APA)

Peyrard, M., Borges, B., Gligoric, K., & West, R. (2021). Laughing Heads: Can Transformers Detect What Makes a Sentence Funny? In IJCAI International Joint Conference on Artificial Intelligence (pp. 3899–3905). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2021/537
