Why skip if you can combine: A simple knowledge distillation technique for intermediate layers

Abstract

As computing power grows, neural machine translation (NMT) models grow with it and become more accurate; however, they also become harder to deploy on edge devices due to memory constraints. A common remedy is to distill knowledge from a large, accurately trained teacher network (T) into a compact student network (S). Although knowledge distillation (KD) is useful in most cases, our study shows that existing KD techniques may not be suitable enough for deep NMT engines, so we propose a novel alternative. In our model, besides matching the predictions of T and S, we use a combinatorial mechanism to inject layer-level supervision from T into S. We target low-resource settings and evaluate our translation engines on the Portuguese→English, Turkish→English, and English→German directions. Students trained with our technique have 50% fewer parameters and still deliver results comparable to those of 12-layer teachers.
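The core idea in the abstract — supervising each student layer with a combination of teacher layers rather than skipping to a single matched teacher layer — can be sketched roughly as follows. This is a minimal illustrative sketch of the general mechanism, not the paper's exact formulation: the function names, the fixed per-layer weights, and the plain MSE matching loss are all assumptions introduced here for clarity.

```python
# Hedged sketch: layer-level KD where every student layer is matched against
# a weighted combination of ALL teacher layers ("combine" instead of "skip").
# Hidden states are represented as plain lists of floats for simplicity.

def combine_teacher_layers(teacher_states, weights):
    """Return the weighted sum of teacher hidden states (one weight per layer)."""
    dim = len(teacher_states[0])
    combined = [0.0] * dim
    for w, state in zip(weights, teacher_states):
        for i, v in enumerate(state):
            combined[i] += w * v
    return combined

def layer_kd_loss(student_states, teacher_states, weight_matrix):
    """Mean-squared error between each student layer and its combined teacher
    target. weight_matrix has one row of teacher-layer weights per student
    layer; in the skip-based alternative, each row would be one-hot instead."""
    loss = 0.0
    for s_state, weights in zip(student_states, weight_matrix):
        target = combine_teacher_layers(teacher_states, weights)
        loss += sum((a - b) ** 2 for a, b in zip(s_state, target)) / len(s_state)
    return loss / len(student_states)
```

In a full training setup this layer-matching term would be added to the usual KD loss on the teacher's output predictions; here, a 6-layer student distilled from a 12-layer teacher would use a 6×12 weight matrix so that no teacher layer is discarded outright.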


Citation (APA)

Wu, Y., Passban, P., Rezagholizadeh, M., & Liu, Q. (2020). Why skip if you can combine: A simple knowledge distillation technique for intermediate layers. In EMNLP 2020 - 2020 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference (pp. 1016–1021). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.emnlp-main.74
