This paper presents the system and approach of the Arizonans team for SemEval-2023 Task 9: Multilingual Tweet Intimacy Analysis. We fine-tune XLM-T, a multilingual RoBERTa model pretrained on about 200M tweets. Our final model ranked 9th of 45 overall, 13th on seen languages, and 8th on unseen languages.
Bozdag, N. B., Bilgis, T., & Bethard, S. (2023). Arizonans at SemEval-2023 Task 9: Multilingual Tweet Intimacy Analysis with XLM-T. In 17th International Workshop on Semantic Evaluation, SemEval 2023 - Proceedings of the Workshop (pp. 1656–1659). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.semeval-1.230