DeText: A Deep Text Ranking Framework with BERT

Abstract

Ranking is the most important component in a search system. Most search systems deal with large amounts of natural language data, so an effective ranking system requires a deep understanding of text semantics. Recently, deep learning based natural language processing (deep NLP) models have generated promising results for ranking systems. BERT is one of the most successful models for learning contextual embeddings, and it has been applied to capture complex query-document relations in search ranking. However, this is generally done by exhaustively interacting each query word with each document word, which is too inefficient for online serving in production search systems. In this paper, we investigate how to build an efficient BERT-based ranking model for industry use cases. The solution is further extended to a general ranking framework, DeText, which is open sourced and can be applied to various production ranking systems. Offline and online experiments with DeText on three real-world search systems show significant improvements over state-of-the-art approaches.
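The efficiency concern above comes from cross-encoder ranking, where every query token attends to every document token at serving time. A common alternative, which the sketch below illustrates, is a representation-based (two-tower) setup: documents are encoded into fixed-size vectors offline, so only the query must be encoded online. This is a minimal illustration, not DeText's actual implementation; `toy_encoder` is a hypothetical stand-in for a BERT encoder, using deterministic hashed token embeddings.

```python
import zlib
import numpy as np

def toy_encoder(text, dim=16):
    """Stand-in for a BERT sentence encoder: deterministic hashed
    token embeddings, mean-pooled into one fixed-size vector."""
    vecs = []
    for tok in text.lower().split():
        # Seed per token with a stable hash so embeddings are reproducible.
        rng = np.random.default_rng(zlib.crc32(tok.encode()))
        vecs.append(rng.standard_normal(dim))
    return np.mean(vecs, axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Offline step: document embeddings are computed once and cached,
# so online serving never runs the expensive encoder on documents.
docs = {
    "d1": "machine learning engineer jobs",
    "d2": "pastry chef openings in paris",
}
doc_embs = {doc_id: toy_encoder(text) for doc_id, text in docs.items()}

def rank(query):
    """Online step: encode only the query, then score against
    the precomputed document embeddings."""
    q = toy_encoder(query)
    scores = {doc_id: cosine(q, emb) for doc_id, emb in doc_embs.items()}
    return sorted(scores, key=scores.get, reverse=True)

print(rank("machine learning jobs"))  # → ['d1', 'd2']
```

The trade-off is that the query and document interact only through their pooled vectors, giving up the fine-grained word-by-word matching of a cross-encoder in exchange for serving cost that is linear in the query length rather than in the query-document product.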

Citation (APA)

Guo, W., Liu, X., Wang, S., Gao, H., Sankar, A., Yang, Z., … Agarwal, D. (2020). DeText: A Deep Text Ranking Framework with BERT. In International Conference on Information and Knowledge Management, Proceedings (pp. 2509–2516). Association for Computing Machinery. https://doi.org/10.1145/3340531.3412699
