ROZAM at SemEval-2023 Task 9: Multilingual Tweet Intimacy Analysis

Abstract

We build a model for a regression task using the large multilingual pre-trained language model XLM-T and fine-tune it on the MINT (Multilingual INTimacy) analysis dataset, which covers 6 languages for training and 4 additional languages for testing the model's zero-shot performance. The dataset is annotated with intimacy scores. We experiment with several deep learning architectures to predict the intimacy score. To achieve optimal performance, we modify several model settings, including the loss function and the number and type of layers. In total, we ran 16 end-to-end experiments. Our best system achieved a Pearson correlation score of 0.52.
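A minimal sketch of the kind of setup the abstract describes: fine-tuning an XLM-T checkpoint with a regression head and evaluating with Pearson correlation. The model name, file names, column names, loss (the default MSE regression head), and hyperparameters below are illustrative assumptions, not the authors' exact configuration.

# Illustrative sketch only: fine-tune an XLM-T checkpoint for intimacy-score
# regression and score it with Pearson correlation. Model name, dataset files,
# column names, and hyperparameters are assumptions, not the paper's setup.
import numpy as np
from scipy.stats import pearsonr
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "cardiffnlp/twitter-xlm-roberta-base"  # an XLM-T checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# num_labels=1 makes the sequence-classification head a regression head,
# trained with mean-squared-error loss by default.
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=1)

# Hypothetical CSV layout: a "text" column with tweets and a float "label"
# column holding the annotated intimacy scores.
data = load_dataset("csv", data_files={"train": "mint_train.csv",
                                       "validation": "mint_dev.csv"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

data = data.map(tokenize, batched=True)

def compute_metrics(eval_pred):
    preds, labels = eval_pred
    # Pearson correlation is the task's official metric.
    return {"pearson": pearsonr(np.squeeze(preds), labels)[0]}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="xlmt-intimacy",
                           num_train_epochs=3,
                           per_device_train_batch_size=16,
                           evaluation_strategy="epoch"),
    train_dataset=data["train"],
    eval_dataset=data["validation"],
    tokenizer=tokenizer,            # enables default padding collator
    compute_metrics=compute_metrics,
)
trainer.train()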

Cite

APA

Rostamkhani, M., Zamaninejad, G., & Eetemadi, S. (2023). ROZAM at SemEval-2023 Task 9: Multilingual Tweet Intimacy Analysis. In 17th International Workshop on Semantic Evaluation, SemEval 2023 - Proceedings of the Workshop (pp. 2029–2032). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.semeval-1.278
