Are Multilingual Models Effective in Code-Switching?

37 citations · 76 readers (Mendeley)

Abstract

Multilingual language models have shown decent performance in multilingual and cross-lingual natural language understanding tasks. However, their power in code-switching tasks has not been fully explored. In this paper, we study the effectiveness of multilingual language models, assessing their capability and adaptability in the mixed-language setting and measuring their practicality by inference speed, performance, and number of parameters. We conduct experiments on named entity recognition and part-of-speech tagging in three language pairs and compare against existing methods, such as bilingual embeddings and multilingual meta-embeddings. Our findings suggest that pre-trained multilingual models do not necessarily guarantee high-quality representations for code-switching, while meta-embeddings achieve similar results with significantly fewer parameters.
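To illustrate the meta-embedding idea the abstract refers to, here is a minimal sketch (not the paper's actual implementation): several word vectors from different embedding spaces are combined into one vector via attention weights. The scoring vector `attn_params` and the dimensions are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array of scores.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def meta_embedding(word_vectors, attn_params):
    """Combine word vectors from multiple embedding spaces.

    word_vectors: list of (d,) arrays, one per source embedding space
                  (e.g. embeddings from each language in a pair).
    attn_params:  (d,) scoring vector (a stand-in for learned attention).
    Returns a single (d,) meta-embedding: the attention-weighted sum.
    """
    scores = np.array([v @ attn_params for v in word_vectors])
    weights = softmax(scores)
    return sum(w * v for w, v in zip(weights, word_vectors))

# Toy usage: three 8-dimensional source embeddings for one word.
rng = np.random.default_rng(0)
vectors = [rng.standard_normal(8) for _ in range(3)]
params = rng.standard_normal(8)
combined = meta_embedding(vectors, params)
print(combined.shape)  # (8,)
```

The weighted sum keeps the output dimensionality equal to the input spaces, which is one reason such models can stay far smaller than a full pre-trained multilingual transformer.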

Citation (APA)

Winata, G. I., Cahyawijaya, S., Liu, Z., Lin, Z., Madotto, A., & Fung, P. (2021). Are Multilingual Models Effective in Code-Switching? In Computational Approaches to Linguistic Code-Switching, CALCS 2021 - Proceedings of the 5th Workshop (pp. 142–153). Association for Computational Linguistics (ACL). https://doi.org/10.26615/978-954-452-056-4_020
