In this paper, we present our system for the SemEval 2020 task on code-mixed sentiment analysis. Our system makes use of large transformer-based multilingual embeddings such as mBERT. Recent work has shown that these models possess the ability to solve code-mixed tasks in addition to their originally demonstrated cross-lingual abilities. We evaluate the stock versions of these models on the sentiment analysis task and also show that their performance can be improved by using unlabelled code-mixed data. Our submission (username Genius1237) achieved the second rank on the English-Hindi subtask with an F1 score of 0.726.
Citation:
Srinivasan, A. (2020). MSR India at SemEval-2020 Task 9: Multilingual Models Can Do Code-Mixing Too. In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval 2020), co-located with the 28th International Conference on Computational Linguistics (COLING 2020) (pp. 951–956). International Committee for Computational Linguistics. https://doi.org/10.18653/v1/2020.semeval-1.122