While recent benchmarks have spurred a lot of new work on improving the generalization of pretrained multilingual language models on multilingual tasks, techniques to improve code-switched natural language understanding tasks have been far less explored. In this work, we propose the use of bilingual intermediate pretraining as a reliable technique to derive large and consistent performance gains using code-switched text on three different NLP tasks: Natural Language Inference (NLI), Question Answering (QA) and Sentiment Analysis (SA). We show consistent performance gains on four different code-switched language-pairs (Hindi-English, Spanish-English, Tamil-English and Malayalam-English) for SA and on Hindi-English for NLI and QA. We also present a code-switched masked language modeling (MLM) pretraining technique that consistently benefits SA compared to standard MLM pretraining using real code-switched text.
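As a rough illustration of the kind of intermediate MLM pretraining step the abstract refers to, the sketch below continues masked language modeling on a few code-switched sentences with Hugging Face Transformers. The checkpoint name, hyperparameters, and example sentences are placeholders and not the authors' actual configuration or data; the paper's bilingual intermediate pretraining and code-switched MLM variants differ in their corpora and training schedule.

```python
# Minimal sketch of continued masked language modeling (MLM) on code-switched
# text, in the spirit of the intermediate pretraining step described above.
# Checkpoint, hyperparameters, and sentences are illustrative placeholders.
import torch
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
)

# Toy Hindi-English code-switched sentences (romanized), for illustration only.
code_switched_sentences = [
    "yeh movie bahut boring thi, I almost fell asleep",
    "kal ka match dekha? it was absolutely amazing",
    "mujhe yeh restaurant pasand hai, the food is great",
]

model_name = "bert-base-multilingual-cased"  # assumed multilingual encoder
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.train()

# Standard MLM collator: randomly masks 15% of tokens and builds the labels.
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

encodings = tokenizer(code_switched_sentences, truncation=True, max_length=64)
features = [{"input_ids": ids} for ids in encodings["input_ids"]]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# A single illustrative pretraining step; real intermediate pretraining would
# loop over a large code-switched (or bilingual) corpus before fine-tuning on
# the target NLI/QA/SA task.
batch = collator(features)
outputs = model(
    input_ids=batch["input_ids"],
    attention_mask=batch["attention_mask"],
    labels=batch["labels"],
)
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
print(f"MLM loss: {outputs.loss.item():.3f}")
```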
Citation:
Prasad, A., Rehan, M. A., Pathak, S., & Jyothi, P. (2021). The Effectiveness of Intermediate-Task Training for Code-Switched Natural Language Understanding. In Proceedings of the 1st Workshop on Multilingual Representation Learning (MRL 2021) (pp. 176–190). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.mrl-1.16