This paper presents the 10th- and 11th-place systems for Subtask A - English and Subtask A - Arabic, respectively, of SemEval-2022 Task 6. The goal of Subtask A was to classify a given text sequence as sarcastic or non-sarcastic. We also briefly cover our method for Subtask B, which performed poorly compared with most submissions on the official leaderboard. All of the developed solutions used a transformer-based language model to encode the text sequences, with the pretrained weights and classifier adapted to the language and subtask at hand.
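As an illustration of the general recipe the abstract describes (a pretrained transformer encoder with a language-specific checkpoint and a binary classification head), the following is a minimal sketch using the Hugging Face transformers library. The checkpoint names, helper functions, and prediction loop are assumptions for illustration only and are not taken from the paper.

```python
# Minimal sketch of a transformer encoder + binary classification head
# for sarcasm detection. Checkpoint names below are assumptions, not the
# ones necessarily used by the system described in the paper.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed language-specific pretrained checkpoints.
CHECKPOINTS = {
    "english": "roberta-base",                    # assumption
    "arabic": "aubmindlab/bert-base-arabertv02",  # assumption
}


def build_classifier(language: str):
    """Load a pretrained encoder and attach a fresh 2-way classification head."""
    name = CHECKPOINTS[language]
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)
    return tokenizer, model


def predict_sarcasm(texts, tokenizer, model):
    """Return 0 (non-sarcastic) or 1 (sarcastic) for each input text."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**batch).logits
    return logits.argmax(dim=-1).tolist()


if __name__ == "__main__":
    tok, clf = build_classifier("english")
    print(predict_sarcasm(["Oh great, another Monday."], tok, clf))
```

In practice such a model would first be fine-tuned on the task's labeled training data before the prediction step above is meaningful; the sketch only shows the encoder-plus-classifier structure.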