Tutoring Helps Students Learn Better: Improving Knowledge Distillation for BERT with Tutor Network

2 citations · 19 Mendeley readers

Abstract

Pre-trained language models have achieved remarkable successes in natural language processing tasks, but at the cost of ever-increasing model size. To address this issue, knowledge distillation (KD) has been widely applied to compress language models. However, typical KD approaches for language models overlook the difficulty of training examples, suffering from the transfer of incorrect teacher predictions and inefficient training. In this paper, we propose a novel KD framework, Tutor-KD, which improves distillation effectiveness by controlling the difficulty of training examples during pre-training. We introduce a tutor network that generates samples that are easy for the teacher but difficult for the student, trained with a carefully designed policy gradient method. Experimental results show that Tutor-KD significantly and consistently outperforms state-of-the-art KD methods with variously sized student models on the GLUE benchmark, demonstrating that the tutor can effectively generate training examples for the student.
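To make the abstract's idea concrete, below is a minimal, self-contained sketch of tutor-guided distillation: a tutor proposes edited inputs, the student is distilled from the frozen teacher on those inputs, and the tutor is updated with a REINFORCE-style policy gradient whose reward is high when the teacher handles a sample correctly but the student does not. The tiny modules, the single-position edit, and the exact reward shape are illustrative assumptions for this sketch, not the paper's implementation.

```python
# Toy sketch of tutor-guided knowledge distillation (assumed setup, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM, CLASSES = 100, 32, 2

class TinyClassifier(nn.Module):
    """Stand-in for a teacher or student language model with a classification head."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, DIM)
        self.head = nn.Linear(DIM, CLASSES)
    def forward(self, tokens):                     # tokens: (batch, seq)
        return self.head(self.emb(tokens).mean(dim=1))  # (batch, classes)

class Tutor(nn.Module):
    """Proposes a replacement token for one position (a toy 'sample generator')."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, DIM)
        self.out = nn.Linear(DIM, VOCAB)
    def forward(self, tokens):
        return self.out(self.emb(tokens).mean(dim=1))    # logits over the vocabulary

teacher, student, tutor = TinyClassifier(), TinyClassifier(), Tutor()
teacher.eval()                                     # the teacher stays frozen
opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)
opt_t = torch.optim.Adam(tutor.parameters(), lr=1e-3)

tokens = torch.randint(0, VOCAB, (8, 16))          # a toy batch of token ids
labels = torch.randint(0, CLASSES, (8,))

# 1) The tutor samples a replacement token for position 0 of each sequence.
logits_t = tutor(tokens)
dist = torch.distributions.Categorical(logits=logits_t)
new_tok = dist.sample()                            # (batch,)
edited = tokens.clone()
edited[:, 0] = new_tok

# 2) Distillation step: the student matches the teacher's soft predictions on the edits.
with torch.no_grad():
    p_teacher = F.softmax(teacher(edited), dim=-1)
log_p_student = F.log_softmax(student(edited), dim=-1)
kd_loss = F.kl_div(log_p_student, p_teacher, reduction="batchmean")
opt_s.zero_grad(); kd_loss.backward(); opt_s.step()

# 3) Policy-gradient step for the tutor: reward edits that the teacher still
#    classifies correctly (easy for the teacher) but on which the student
#    disagrees with the teacher (hard for the student). Reward shape is assumed.
with torch.no_grad():
    teacher_pred = teacher(edited).argmax(-1)
    reward = ((teacher_pred == labels).float()
              * (student(edited).argmax(-1) != teacher_pred).float())
pg_loss = -(dist.log_prob(new_tok) * reward).mean()
opt_t.zero_grad(); pg_loss.backward(); opt_t.step()
```

The discrete token swap is not differentiable, which is why the tutor is trained with a score-function (REINFORCE) estimator here rather than by backpropagating through the edit.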

Cite (APA)

Kim, J., Park, J. H., Lee, M., Mok, W. L., Choi, J. Y., & Lee, S. K. (2022). Tutoring Helps Students Learn Better: Improving Knowledge Distillation for BERT with Tutor Network. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 (pp. 7371–7382). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.emnlp-main.498
