AD-KD: Attribution-Driven Knowledge Distillation for Language Model Compression

Citations of this article: 3
Mendeley readers: 29

Abstract

Knowledge distillation has attracted a great deal of interest recently to compress pre-trained language models. However, existing knowledge distillation methods suffer from two limitations. First, the student model simply imitates the teacher's behavior while ignoring the underlying reasoning. Second, these methods usually focus on the transfer of sophisticated model-specific knowledge but overlook data-specific knowledge. In this paper, we present a novel attribution-driven knowledge distillation approach, which explores the token-level rationale behind the teacher model based on Integrated Gradients (IG) and transfers attribution knowledge to the student model. To enhance the knowledge transfer of model reasoning and generalization, we further explore multi-view attribution distillation on all potential decisions of the teacher. Comprehensive experiments are conducted with BERT on the GLUE benchmark. The experimental results demonstrate the superior performance of our approach over several state-of-the-art methods.
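To make the mechanism described in the abstract concrete, the following is a minimal PyTorch sketch of computing token-level Integrated Gradients attributions from a teacher and matching them in a student across all output classes (the multi-view idea). The model interface (`inputs_embeds` in, logits out), the zero-embedding baseline, the L2 token-score reduction, and the MSE matching loss are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def integrated_gradients(model, embeddings, target_class, steps=16, create_graph=False):
    """Approximate IG attributions of token embeddings w.r.t. one class logit.

    Uses a zero-embedding baseline and a Riemann-sum approximation of the
    path integral; returns one scalar attribution per token (L2 over hidden dims).
    Assumes `model(inputs_embeds=...)` returns logits of shape (batch, num_classes).
    """
    baseline = torch.zeros_like(embeddings)
    total_grads = torch.zeros_like(embeddings)
    for alpha in torch.linspace(0.0, 1.0, steps):
        interp = (baseline + alpha * (embeddings - baseline)).detach().requires_grad_(True)
        logits = model(inputs_embeds=interp)
        score = logits[:, target_class].sum()
        # create_graph=True keeps the graph so the attribution loss can train the student.
        grads = torch.autograd.grad(score, interp, create_graph=create_graph)[0]
        total_grads = total_grads + grads
    avg_grads = total_grads / steps
    attributions = (embeddings - baseline) * avg_grads  # IG: (x - x') * avg gradient
    return attributions.norm(dim=-1)                    # token-level attribution scores


def attribution_distillation_loss(teacher, student, emb_teacher, emb_student, num_classes):
    """Multi-view matching: align normalized token attribution maps for every class."""
    loss = 0.0
    for c in range(num_classes):
        a_t = integrated_gradients(teacher, emb_teacher, c).detach()
        a_s = integrated_gradients(student, emb_student, c, create_graph=True)
        a_t = F.normalize(a_t, p=2, dim=-1)
        a_s = F.normalize(a_s, p=2, dim=-1)
        loss = loss + F.mse_loss(a_s, a_t)
    return loss / num_classes
```

In practice this attribution loss would be added to the usual task and logit-distillation losses; the baseline choice, number of IG steps, and normalization are hyperparameters in this sketch.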

Cite (APA)

Wu, S., Chen, H., Quan, X., Wang, Q., & Wang, R. (2023). AD-KD: Attribution-Driven Knowledge Distillation for Language Model Compression. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 8449–8465). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-long.471
