Understanding BERT Rankers under Distillation

Abstract

Deep language models such as BERT, pre-trained on large corpora, have given a substantial performance boost to state-of-the-art information retrieval ranking systems. The knowledge embedded in such models allows them to pick up complex matching signals between passages and queries. However, their high computation cost during inference limits deployment in real-world search scenarios. In this paper, we study if and how the knowledge for search within BERT can be transferred to a smaller ranker through distillation. Our experiments demonstrate that a proper distillation procedure is crucial: it yields up to a nine-fold speedup while preserving state-of-the-art performance.
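For readers unfamiliar with the technique, the sketch below illustrates a standard soft-label distillation loss in the style of Hinton et al., where a smaller student ranker is trained to match the temperature-softened relevance scores of a BERT teacher. This is a generic, hypothetical illustration rather than the specific procedure evaluated in the paper; the function and tensor names are assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """Soft-label knowledge distillation: KL divergence between the
    temperature-softened teacher and student score distributions."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_soft_student, soft_teacher,
                    reduction="batchmean") * temperature ** 2

# Example: logits over (relevant, non-relevant) for a batch of query-passage pairs.
teacher_logits = torch.randn(8, 2)   # from the large BERT teacher
student_logits = torch.randn(8, 2)   # from the smaller student ranker
loss = distillation_loss(student_logits, teacher_logits)
```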

Citation (APA)

Gao, L., Dai, Z., & Callan, J. (2020). Understanding BERT Rankers under Distillation. In ICTIR 2020 - Proceedings of the 2020 ACM SIGIR International Conference on Theory of Information Retrieval (pp. 149–152). Association for Computing Machinery. https://doi.org/10.1145/3409256.3409838
