Detecting Languages Unintelligible to Multilingual Models through Local Structure Probes

Abstract

Providing better language tools for low-resource and endangered languages is imperative for equitable growth. Recent progress with massively multilingual pretrained models has proven surprisingly effective at performing zero-shot transfer to a wide variety of languages. However, this transfer is not universal, with many languages not currently understood by multilingual approaches. It is estimated that only 72 languages possess a “small set of labeled datasets” on which a model's performance could be tested; the vast majority of languages lack the resources needed even to evaluate performance. In this work, we attempt to clarify which languages do and do not currently benefit from such transfer. To that end, we develop a general approach that requires only unlabelled text to detect which languages are not well understood by a cross-lingual model. Our approach is derived from the hypothesis that if a model's understanding is insensitive to perturbations to text in a language, it is likely to have a limited understanding of that language. We construct a cross-lingual sentence similarity task to evaluate our approach empirically on 350, primarily low-resource, languages.
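
The probe the abstract describes lends itself to a short sketch: embed a sentence, embed a locally perturbed copy, and measure how much the representation moves. The snippet below is a minimal illustration of that idea, not the paper's exact method; the choice of encoder (LaBSE), mean pooling, the adjacent-character-swap perturbation, and all function names are assumptions made for concreteness.

```python
import random

import torch
from transformers import AutoModel, AutoTokenizer

# Assumed encoder; any multilingual sentence encoder could stand in here.
MODEL_NAME = "sentence-transformers/LaBSE"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()


def embed(sentences):
    """Mean-pooled sentence embeddings (one common pooling choice)."""
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state  # (batch, tokens, hidden)
    mask = batch["attention_mask"].unsqueeze(-1)   # (batch, tokens, 1)
    return (hidden * mask).sum(1) / mask.sum(1)    # mask out padding, then average


def perturb(sentence, rng):
    """Local perturbation: swap two adjacent characters in random words."""
    words = []
    for w in sentence.split():
        if len(w) > 3 and rng.random() < 0.5:
            i = rng.randrange(len(w) - 1)
            w = w[:i] + w[i + 1] + w[i] + w[i + 2:]
        words.append(w)
    return " ".join(words)


def sensitivity(sentences, n_perturbations=5, seed=0):
    """Average drop in cosine similarity under local perturbations.

    A score near zero means the encoder barely reacts when the local
    structure of the text is scrambled, which, under the paper's
    hypothesis, suggests limited understanding of the language.
    """
    rng = random.Random(seed)
    orig = torch.nn.functional.normalize(embed(sentences), dim=-1)
    drops = []
    for _ in range(n_perturbations):
        pert = torch.nn.functional.normalize(
            embed([perturb(s, rng) for s in sentences]), dim=-1
        )
        # Cosine similarity of unit vectors is their dot product.
        drops.append(1.0 - (orig * pert).sum(-1).mean().item())
    return sum(drops) / len(drops)


# Usage: compare scores across languages on comparable unlabelled text;
# a markedly lower score flags a language the model may not understand.
print(sensitivity(["The dog runs in the park.", "The cat sleeps on the mat."]))
```

Comparing this score across languages on comparable unlabelled corpora yields a ranking of how well the model appears to understand each one, which is the kind of labeled-data-free diagnostic the abstract argues for.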

Citation (APA)

Clouâtre, L., Parthasarathi, P., Zouaq, A., & Chandar, S. (2022). Detecting Languages Unintelligible to Multilingual Models through Local Structure Probes. In Findings of the Association for Computational Linguistics: EMNLP 2022 (pp. 5404–5425). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-emnlp.393
