DLJUST at SemEval-2021 Task 7: Hahackathon: Linking Humor and Offense


Abstract

Humor detection and rating pose interesting linguistic challenges for NLP: humor is highly subjective, depending on how a joke is perceived and on the context in which it is used. This paper applies and compares transformer models, BERT Base and Large, BERTweet, RoBERTa Base and Large, and RoBERTa Base Irony, for detecting and rating humor and offense. The models were given texts, in cased and uncased form, from SemEval-2021 Task 7: HaHackathon: Linking Humor and Offense Across Different Age Groups. For the first subtask, humor detection, the best-scoring model is the cased BERTweet Base model with an F1-score of 0.9540; for the second subtask, average humor rating, it is cased BERT Large with the lowest RMSE of 0.5555; and for the fourth subtask, average offensiveness rating, it is the cased BERTweet Base model with the lowest RMSE of 0.4822.
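The abstract describes fine-tuning off-the-shelf transformer checkpoints for both a binary detection subtask (scored with F1) and rating subtasks (scored with RMSE). The sketch below illustrates that general setup; it is not the authors' released code, and it assumes the standard HuggingFace transformers API and the public vinai/bertweet-base checkpoint, with illustrative inputs.

    # Minimal sketch: using one BERTweet checkpoint for the two kinds of
    # subtasks named in the abstract. Checkpoint choice and inputs are
    # illustrative assumptions, not the authors' configuration.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    MODEL_NAME = "vinai/bertweet-base"  # assumed public checkpoint on the HF Hub
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

    # Subtask 1 (humor detection): binary classification, evaluated with F1.
    clf = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

    # Subtasks 2/4 (humor / offensiveness rating): single-output regression,
    # evaluated with RMSE; num_labels=1 gives the model an MSE training loss.
    reg = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=1)

    batch = tokenizer(["That joke was so bad it was good."],
                      return_tensors="pt", padding=True, truncation=True)

    with torch.no_grad():
        is_humor = clf(**batch).logits.argmax(dim=-1)  # predicted 0/1 humor label
        rating = reg(**batch).logits.squeeze(-1)       # predicted rating score

In practice each head would be fine-tuned on the task's labeled data before inference; the cased/uncased comparison in the paper amounts to swapping the checkpoint and leaving the rest of this pipeline unchanged.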

Cite

APA

Al-Omari, H., AbedulNabi, I., & Duwairi, R. (2021). DLJUST at SemEval-2021 Task 7: Hahackathon: Linking Humor and Offense. In SemEval 2021 - 15th International Workshop on Semantic Evaluation, Proceedings of the Workshop (pp. 1114–1119). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.semeval-1.155
