Multi-View Feature Representation for Dialogue Generation with Bidirectional Distillation

Abstract

Neural dialogue models often produce low-quality responses when interacting with users in practice, demonstrating difficulty in generalizing beyond the training data. Recently, knowledge distillation has been used to regularize the student by transferring knowledge from a teacher. However, the teacher and the student are trained on the same dataset and tend to learn similar feature representations, whereas the most general knowledge should be found through differences. The search for general knowledge is further hindered by unidirectional distillation: the student must obey the teacher and may discard knowledge that is truly general but refuted by the teacher. To this end, we propose a novel training framework in which the learning of general knowledge is more in line with the idea of reaching consensus, i.e., finding common knowledge that benefits different, and indeed all, datasets through diversified learning partners. Concretely, the training task is divided into a group of subtasks, one per student. Each student is not only optimized on its allocated subtask but also imitates the multi-view feature representation aggregated from the other students (its student peers), which induces the students to capture knowledge common to the different subtasks and alleviates over-fitting to the allocated subtasks. To further enhance generalization, we extend unidirectional distillation to bidirectional distillation, which encourages each student and its peers to co-evolve by exchanging complementary knowledge. Empirical results and analysis demonstrate that our training framework effectively improves model generalization without sacrificing training efficiency.
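
For readers who want a concrete picture of the training signal, the following PyTorch sketch shows one way the multi-view, bidirectional objective described above could be wired up. It is a minimal illustration under assumptions of ours, not the authors' released code: the toy Student model, the mean aggregation of peer features, the MSE imitation loss, and the ALPHA weight are all hypothetical stand-ins for the paper's dialogue models and aggregation scheme.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sizes; the paper trains full dialogue models, whereas this
# sketch uses a toy classifier purely to show the training signal.
K, IN_DIM, FEAT_DIM, VOCAB = 4, 64, 128, 1000

class Student(nn.Module):
    """Toy stand-in for a dialogue model: an encoder that yields a feature
    representation, plus a prediction head over the vocabulary."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(IN_DIM, FEAT_DIM), nn.Tanh())
        self.head = nn.Linear(FEAT_DIM, VOCAB)

    def forward(self, x):
        feat = self.encoder(x)
        return feat, self.head(feat)

students = [Student() for _ in range(K)]
optimizers = [torch.optim.Adam(s.parameters(), lr=1e-3) for s in students]
ALPHA = 0.5  # weight of the imitation term; a hypothetical hyperparameter

def train_step(batches):
    """One co-training step; batches[i] is the (x, y) batch for subtask i."""
    # Forward every student on its own shard first, so that within the same
    # step each student can serve as a peer target for all the others
    # (this mutual exchange is the "bidirectional" part).
    feats, logits = zip(*(s(x) for s, (x, _) in zip(students, batches)))
    for i, (opt, (_, y)) in enumerate(zip(optimizers, batches)):
        task_loss = F.cross_entropy(logits[i], y)
        # Multi-view target: aggregate (here, simply average) the peers'
        # features, detached so student i imitates its peers rather than
        # back-propagating through them.
        peer_view = torch.stack(
            [feats[j].detach() for j in range(K) if j != i]
        ).mean(dim=0)
        distill_loss = F.mse_loss(feats[i], peer_view)
        loss = task_loss + ALPHA * distill_loss
        opt.zero_grad()
        loss.backward()
        opt.step()

# Dummy shards: one batch of 8 examples per subtask.
batches = [
    (torch.randn(8, IN_DIM), torch.randint(0, VOCAB, (8,)))
    for _ in range(K)
]
train_step(batches)

Detaching the peer features keeps each gradient update local to one student, yet knowledge still flows in both directions over successive steps, since every student alternately acts as imitator and as part of the aggregated peer target.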

Citation (APA)

Feng, S., Ren, X., Li, K., & Sun, X. (2021). Multi-View Feature Representation for Dialogue Generation with Bidirectional Distillation. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 14B, pp. 12812–12820). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i14.17516
