Hard Gate Knowledge Distillation - Leverage Calibration for a Robust and Reliable Language Model

Abstract

In knowledge distillation, a student model is trained with supervision from both a teacher's knowledge and observations drawn from a training data distribution. The teacher's knowledge is regarded as a carrier of inter-class relations that provide a meaningful supervisory signal to the student; hence, much effort has been devoted to identifying what knowledge should be distilled. In this paper, we explore a question that has received little attention: "when to distill such knowledge." We answer this question through the concept of model calibration; we view a teacher model not only as a source of knowledge but also as a gauge for detecting miscalibration of the student. This simple yet novel view leads to a hard gate knowledge distillation scheme that switches between learning from the teacher model and learning from the training data. We verify the gating mechanism in the context of natural language generation at both the token level and the sentence level. Empirical comparisons with strong baselines show that hard gate knowledge distillation not only improves model generalization, but also significantly lowers model calibration error.
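The abstract describes a per-token hard gate that decides whether the student learns from the teacher's distribution or from the gold data label. Below is a minimal PyTorch sketch of that idea; the specific gating criterion (student assigning higher probability to the gold token than the teacher, taken as a sign of over-confidence) is an illustrative assumption, not the paper's exact rule, and all function and variable names are hypothetical.

```python
import torch
import torch.nn.functional as F

def hard_gate_kd_loss(student_logits, teacher_logits, targets, temperature=1.0):
    """Token-level hard-gate KD sketch (assumed gating rule, not the paper's exact one).

    Where the student looks over-confident relative to the teacher on the gold
    token, supervise with the teacher's distribution (KL); otherwise, supervise
    with the one-hot data label (cross-entropy).
    """
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)

    # Probability each model assigns to the gold token.
    p_student_gold = student_log_probs.exp().gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    p_teacher_gold = teacher_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)

    # Hard gate: 1 -> distill from teacher, 0 -> learn from data.
    gate = (p_student_gold > p_teacher_gold).float()

    # Per-token cross-entropy against the gold labels.
    ce = F.nll_loss(
        student_log_probs.view(-1, student_log_probs.size(-1)),
        targets.view(-1),
        reduction="none",
    ).view_as(gate)

    # Per-token KL divergence from the teacher's distribution.
    kl = F.kl_div(student_log_probs, teacher_probs, reduction="none").sum(-1)

    per_token = gate * kl + (1.0 - gate) * ce
    return per_token.mean()
```

Because the gate is hard (0/1) rather than a fixed interpolation weight, each token receives exactly one of the two supervision signals, which is how the scheme can act on miscalibrated predictions rather than blending losses uniformly.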

Citation (APA)

Lee, D., Tian, Z., Zhao, Y., Cheung, K. C., & Zhang, N. L. (2022). Hard Gate Knowledge Distillation - Leverage Calibration for a Robust and Reliable Language Model. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 (pp. 9793–9803). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.emnlp-main.665
