Assessment of Massively Multilingual Sentiment Classifiers

Abstract

Models are increasing in size and complexity in the hunt for state-of-the-art (SOTA) results. But what if a 2% gain in performance makes no difference in a production use case? Perhaps the benefits of a smaller, faster model outweigh such slight performance gains. Moreover, in multilingual tasks, equally good performance across languages is more important than a SOTA result on a single one. We present the largest unified multilingual collection of sentiment analysis datasets. We use it to assess 11 models on 80 high-quality sentiment datasets (out of 342 raw datasets collected) in 27 languages, and we include results on internally annotated datasets. We evaluate multiple setups in depth, including fine-tuning transformer-based models, to measure performance. We compare results along numerous dimensions, addressing the imbalance in both language coverage and dataset sizes. Finally, we present best practices for working with such a massive collection of datasets and models from a multilingual perspective.
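
The abstract mentions fine-tuning transformer-based models as one of the evaluated setups. The paper itself includes no code here, so the following is only a minimal sketch of what such a fine-tuning run could look like, assuming the HuggingFace transformers and datasets libraries; the xlm-roberta-base checkpoint, the three-class label scheme, the toy examples, and the hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch: fine-tuning a multilingual transformer for sentiment
# classification. Assumes `pip install transformers datasets torch`.
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "xlm-roberta-base"  # assumed multilingual backbone, not the paper's exact model list

# Tiny illustrative corpus; the paper's 80 curated datasets in 27 languages
# would be loaded here instead.
train = Dataset.from_dict({
    "text": ["Świetny produkt!", "Terrible service.", "Es war okay."],
    "label": [2, 0, 1],  # assumed scheme: 0 = negative, 1 = neutral, 2 = positive
})

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

def tokenize(batch):
    # Tokenize raw text into fixed-length input IDs and attention masks.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

train = train.map(tokenize, batched=True)

# Classification head with three sentiment labels on top of the pretrained encoder.
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=3)

args = TrainingArguments(
    output_dir="sentiment-xlmr",       # where checkpoints are written
    num_train_epochs=3,                # illustrative hyperparameters
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)

Trainer(model=model, args=args, train_dataset=train).train()
```

In a cross-lingual assessment of the kind the abstract describes, one would repeat such a run per model and score it per language (e.g., macro-F1 on each language's test split) rather than reporting a single aggregate number, so that uneven language coverage does not hide weak languages.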

Cite (APA)

Rajda, K., Augustyniak, Ł., Gramacki, P., Gruza, M., Woźniak, S., & Kajdanowicz, T. (2022). Assessment of Massively Multilingual Sentiment Classifiers. In WASSA 2022 - 12th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, Proceedings of the Workshop (pp. 125–140). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.wassa-1.13
