Why skip if you can combine: A simple knowledge distillation technique for intermediate layers


Abstract

With the growth of computing power, neural machine translation (NMT) models have also grown accordingly and become better. However, they have also become harder to deploy on edge devices due to memory constraints. To cope with this problem, a common practice is to distill knowledge from a large and accurately-trained teacher network (T) into a compact student network (S). Although knowledge distillation (KD) is useful in most cases, our study shows that existing KD techniques might not be suitable enough for deep NMT engines, so we propose a novel alternative. In our model, besides matching T and S predictions, we have a combinatorial mechanism to inject layer-level supervision from T into S. In this paper, we target low-resource settings and evaluate our translation engines for Portuguese→English, Turkish→English, and English→German directions. Students trained using our technique have 50% fewer parameters and can still deliver comparable results to those of 12-layer teachers.
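The abstract only outlines the layer-level supervision idea, so the sketch below is one plausible reading rather than the paper's exact formulation: each student layer is matched against a learned combination of all teacher layers instead of a single "skipped-to" layer. The class name CombinedLayerKD, the softmax mixing weights, and the MSE objective are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only (PyTorch): layer-level distillation where every
# student layer is supervised by a learned combination of all teacher layers.
# All names and design choices here are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CombinedLayerKD(nn.Module):
    """Matches each student layer against a learned convex combination of
    teacher hidden states (teacher and student are assumed to share the
    same hidden size)."""

    def __init__(self, num_teacher_layers: int = 12, num_student_layers: int = 6):
        super().__init__()
        # One mixing-weight vector per student layer over all teacher layers.
        self.mix_logits = nn.Parameter(
            torch.zeros(num_student_layers, num_teacher_layers)
        )

    def forward(self, teacher_states, student_states):
        # teacher_states: list of [batch, seq, hidden] tensors, one per teacher layer
        # student_states: list of [batch, seq, hidden] tensors, one per student layer
        t = torch.stack(teacher_states, dim=0)        # [T_layers, batch, seq, hidden]
        weights = F.softmax(self.mix_logits, dim=-1)  # [S_layers, T_layers]
        loss = 0.0
        for i, s in enumerate(student_states):
            # Weighted combination of all teacher layers for student layer i.
            combined = torch.einsum("t,tbsh->bsh", weights[i], t)
            loss = loss + F.mse_loss(s, combined)
        return loss / len(student_states)
```

In practice such an auxiliary term would be added to the standard prediction-matching KD loss (e.g. a KL divergence between teacher and student output distributions) and the usual translation cross-entropy; the weighting between these terms is a tuning choice not specified in the abstract.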

Citation (APA)

Wu, Y., Passban, P., Rezagholizadeh, M., & Liu, Q. (2020). Why skip if you can combine: A simple knowledge distillation technique for intermediate layers. In EMNLP 2020 - 2020 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference (pp. 1016–1021). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.emnlp-main.74
