Fine-Tuning of Multilingual Models for Sentiment Classification in Code-Mixed Indian Language Texts

Abstract

We use XLM (Cross-lingual Language Model), a transformer-based model, to perform sentiment analysis on Kannada-English code-mixed texts. The model was fine-tuned for sentiment analysis on the KanCMD dataset, and its performance was assessed on English-only and Kannada-only scripts. The model was additionally evaluated on Malayalam and Tamil datasets. Our work shows that, at least for sentiment analysis, transformer-based architectures for sequence classification outperform traditional machine learning solutions on code-mixed data.
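The fine-tuning approach described above can be sketched with the Hugging Face `transformers` library. This is not the authors' code: the checkpoint name (`xlm-roberta-base`, a widely available XLM-family model), the three-way label set, and the hyperparameters are all illustrative assumptions, and the training data would have to come from a code-mixed corpus such as KanCMD.

```python
# Minimal sketch (assumed, not from the paper) of fine-tuning a
# multilingual transformer for sentiment classification on code-mixed text.

LABELS = ["negative", "neutral", "positive"]  # assumed 3-way label set
LABEL2ID = {name: i for i, name in enumerate(LABELS)}

def encode_labels(names):
    """Map string sentiment labels to integer class ids."""
    return [LABEL2ID[n] for n in names]

def fine_tune(train_texts, train_labels, model_name="xlm-roberta-base"):
    """Fine-tune a pretrained multilingual encoder for sequence
    classification. Imports are local so the helpers above can be used
    without `torch`/`transformers` installed."""
    import torch
    from transformers import (AutoModelForSequenceClassification,
                              AutoTokenizer, Trainer, TrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(
        model_name, num_labels=len(LABELS))

    # Subword tokenization handles mixed-script (Kannada/Latin) input.
    enc = tokenizer(train_texts, truncation=True, padding=True,
                    return_tensors="pt")

    class SentimentDataset(torch.utils.data.Dataset):
        def __init__(self, encodings, labels):
            self.encodings, self.labels = encodings, labels
        def __len__(self):
            return len(self.labels)
        def __getitem__(self, idx):
            item = {k: v[idx] for k, v in self.encodings.items()}
            item["labels"] = torch.tensor(self.labels[idx])
            return item

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="out", num_train_epochs=3,
                               per_device_train_batch_size=16),
        train_dataset=SentimentDataset(enc, encode_labels(train_labels)),
    )
    trainer.train()
    return tokenizer, model
```

Evaluation on English-only, Kannada-only, Malayalam, or Tamil data would then simply tokenize that test set with the same tokenizer and run the fine-tuned classifier over it.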

CITATION STYLE

APA

Sanghvi, D., Fernandes, L. M., D’Souza, S., Vasaani, N., & Kavitha, K. M. (2023). Fine-Tuning of Multilingual Models for Sentiment Classification in Code-Mixed Indian Language Texts. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13776 LNCS, pp. 224–239). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-24848-1_16
