Distilling Efficient Language-Specific Models for Cross-Lingual Transfer

Abstract

Massively multilingual Transformers (MMTs), such as mBERT and XLM-R, are widely used for cross-lingual transfer learning. While these are pretrained to represent hundreds of languages, end users of NLP systems are often interested only in individual languages. For such purposes, the MMTs' language coverage makes them unnecessarily expensive to deploy in terms of model size, inference time, energy, and hardware cost. We thus propose to extract compressed, language-specific models from MMTs which retain the capacity of the original MMTs for cross-lingual transfer. This is achieved by distilling the MMT bilingually, i.e., using data from only the source and target language of interest. Specifically, we use a two-phase distillation approach, termed BISTILLATION: (i) the first phase distils a general bilingual model from the MMT, while (ii) the second, task-specific phase sparsely fine-tunes the bilingual 'student' model using a task-tuned variant of the original MMT as its 'teacher'. We evaluate this distillation technique in zero-shot cross-lingual transfer across a number of standard cross-lingual benchmarks. The key results indicate that the distilled models exhibit minimal degradation in target-language performance relative to the base MMT despite being significantly smaller and faster. Furthermore, we find that they outperform multilingually distilled models such as DistilmBERT and MiniLMv2 while having a very modest training budget in comparison, even on a per-language basis. We also show that bilingual models distilled from MMTs greatly outperform bilingual models trained from scratch.
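
To make the two-phase recipe concrete, below is a minimal sketch of phase (i), the general bilingual distillation step, written with PyTorch and Hugging Face transformers. The teacher checkpoint (xlm-roberta-base), the student depth, and the plain soft-label KL loss are illustrative assumptions rather than the authors' exact configuration; phase (ii) would follow the same pattern with a task-fine-tuned MMT as teacher and sparse updates to the bilingual student.

```python
# Hypothetical sketch of the two-phase "BISTILLATION" recipe from the abstract.
# Phase (i): distil a smaller bilingual student from an MMT teacher on
# unlabelled source- and target-language text. All model names, sizes, and the
# temperature-scaled KL loss are illustrative assumptions.
import torch
import torch.nn.functional as F
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    XLMRobertaConfig,
    XLMRobertaForMaskedLM,
)

TEACHER = "xlm-roberta-base"  # assumed MMT teacher checkpoint
tokenizer = AutoTokenizer.from_pretrained(TEACHER)
teacher = AutoModelForMaskedLM.from_pretrained(TEACHER).eval()

# Smaller bilingual student: fewer Transformer layers than the teacher
# (the exact depth here is an illustrative choice, not the paper's).
student_cfg = XLMRobertaConfig(
    vocab_size=teacher.config.vocab_size,
    hidden_size=teacher.config.hidden_size,
    num_hidden_layers=6,
    num_attention_heads=teacher.config.num_attention_heads,
)
student = XLMRobertaForMaskedLM(student_cfg)


def distill_step(batch_texts, temperature=2.0):
    """One general-distillation step on unlabelled source/target-language text."""
    enc = tokenizer(batch_texts, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        t_logits = teacher(**enc).logits
    s_logits = student(**enc).logits
    # Soft-label KL divergence between teacher and student token distributions.
    loss = F.kl_div(
        F.log_softmax(s_logits / temperature, dim=-1),
        F.softmax(t_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    return loss


optimizer = torch.optim.AdamW(student.parameters(), lr=5e-5)
loss = distill_step(["An English sentence.", "Ein deutscher Satz."])
loss.backward()
optimizer.step()
# Phase (ii) (not shown): distil again from a task-tuned teacher, updating
# only a sparse subset of the student's parameters.
```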

Cite (APA)

Ansell, A., Ponti, E. M., Korhonen, A., & Vulić, I. (2023). Distilling Efficient Language-Specific Models for Cross-Lingual Transfer. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 8147–8165). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-acl.517
