Distilling Knowledge for Search-based Structured Prediction

Abstract

Many natural language processing tasks can be modeled as structured prediction and solved as a search problem. In this paper, we distill an ensemble of multiple models trained with different initializations into a single model. In addition to learning to match the ensemble's probability output on the reference states, we also use the ensemble to explore the search space and learn from the states encountered during that exploration. Experimental results on two typical search-based structured prediction tasks, transition-based dependency parsing and neural machine translation, show that distillation effectively improves the single model's performance. The final model achieves improvements of 1.32 LAS and 2.65 BLEU over strong baselines on these two tasks respectively, and it outperforms the greedy structured prediction models in previous literature.
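For illustration, below is a minimal sketch of the two training signals the abstract describes: matching the ensemble's averaged action distribution on reference states, and sampling actions from the ensemble to visit off-reference states during exploration. It assumes PyTorch; the helper names (ensemble_probs, distill_loss, sample_action) and tensor shapes are hypothetical and not taken from the paper.

```python
# Minimal sketch (not the authors' code) of ensemble distillation for a
# search-based model, where each state yields a distribution over actions.
import torch
import torch.nn.functional as F

def ensemble_probs(logits_list):
    """Average the softmax distributions of the ensemble members.

    logits_list: list of tensors, each of shape (batch, num_actions).
    """
    probs = torch.stack([F.softmax(l, dim=-1) for l in logits_list])
    return probs.mean(dim=0)

def distill_loss(student_logits, teacher_probs):
    """Cross-entropy of the student against the ensemble's soft targets."""
    log_q = F.log_softmax(student_logits, dim=-1)
    return -(teacher_probs * log_q).sum(dim=-1).mean()

def sample_action(teacher_probs):
    """Exploration step: instead of always following the reference
    transition, sample an action from the ensemble's distribution so the
    student is also supervised on states off the reference path."""
    return torch.multinomial(teacher_probs, num_samples=1)
```

In a training loop, one would typically mix this distillation loss on reference states with the same loss computed on states reached via sample_action; how the two signals are weighted is a design choice the sketch leaves open.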

Citation (APA)

Liu, Y., Che, W., Zhao, H., Qin, B., & Liu, T. (2018). Distilling Knowledge for Search-based Structured Prediction. In ACL 2018 - 56th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers) (Vol. 1, pp. 1393–1402). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/p18-1129
