LMML at SemEval-2020 Task 7: Siamese Transformers for Rating Humor in Edited News Headlines

Abstract

This paper describes my solution to SemEval-2020 Task 7: Assessing the Funniness of Edited News Headlines. I propose a Siamese Transformer approach, coupled with a global attention mechanism, that uses contextual embeddings and focus words to generate features that are fed to a two-layer perceptron, which rates the funniness of the edited headline. I detail experiments evaluating the system's performance. The proposed approach outperforms a baseline Bi-LSTM architecture; it finished 5th (out of 49 teams) in sub-task 1 and 4th (out of 32 teams) in sub-task 2 of the competition, and was the best non-ensemble model in both tasks.
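As a rough sketch of the architecture the abstract describes (not the author's released code), the pipeline is: a shared (Siamese) encoder produces contextual token embeddings for the original and edited headlines, a global attention mechanism pools each into a fixed vector, and a two-layer perceptron regresses a funniness score. The NumPy illustration below stubs the encoder with random embeddings; all names, dimensions, and the concat-plus-absolute-difference feature combination are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_pool(H, w):
    # H: (seq_len, d) token embeddings; w: (d,) global attention vector.
    scores = H @ w                       # one score per token
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                 # softmax attention weights
    return alpha @ H                     # (d,) pooled sentence vector

def mlp_score(x, W1, b1, W2, b2):
    # Two-layer perceptron: one hidden ReLU layer, scalar output.
    h = np.maximum(0.0, x @ W1 + b1)
    return float(h @ W2 + b2)

d, hidden, seq_len = 16, 8, 12

# Siamese encoder stub: in the real system both headlines would pass
# through the same Transformer; here we use random embeddings.
H_orig = rng.standard_normal((seq_len, d))
H_edit = rng.standard_normal((seq_len, d))

w_attn = rng.standard_normal(d)
v_orig = attention_pool(H_orig, w_attn)
v_edit = attention_pool(H_edit, w_attn)

# Combine the two pooled vectors (concatenation + absolute difference
# is a common Siamese feature choice, assumed here).
x = np.concatenate([v_orig, v_edit, np.abs(v_orig - v_edit)])

W1 = rng.standard_normal((3 * d, hidden)) * 0.1
b1 = np.zeros(hidden)
W2 = rng.standard_normal(hidden) * 0.1
b2 = 0.0

score = mlp_score(x, W1, b1, W2, b2)
print(score)
```

The Siamese constraint is captured by reusing the same attention vector `w_attn` (and, in the full model, the same encoder weights) for both headlines, so the two representations live in a shared space before comparison.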

Citation (APA)

Ballapuram, P. (2020). LMML at SemEval-2020 Task 7: Siamese Transformers for Rating Humor in Edited News Headlines. In 14th International Workshops on Semantic Evaluation, SemEval 2020 - co-located 28th International Conference on Computational Linguistics, COLING 2020, Proceedings (pp. 1026–1032). International Committee for Computational Linguistics. https://doi.org/10.18653/v1/2020.semeval-1.134
