Abstract
Recent innovations in Transformer-based ranking models have advanced the state-of-the-art in information retrieval. However, these Transformers are computationally expensive, and their opaque hidden states make it hard to understand the ranking process. In this work, we modularize the Transformer ranker into separate modules for text representation and interaction. We show how this design enables substantially faster ranking using offline pre-computed representations and light-weight online interactions. The modular design is also easier to interpret and sheds light on the ranking process in Transformer rankers.
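To make the modular design concrete, below is a minimal PyTorch sketch of the two-stage idea described in the abstract. It is a simplified illustration under assumed names and dimensions, not the authors' implementation: a representation module encodes documents independently so their representations can be pre-computed offline and cached, and a lightweight cross-attention interaction module scores query–document pairs online against the cache. The class names, layer sizes, and the mean-pool scoring head are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class RepresentationModule(nn.Module):
    """Encodes a text independently of any other text, so document
    representations can be pre-computed offline and cached."""
    def __init__(self, vocab_size=30522, d_model=256, n_layers=2, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, token_ids):                   # (batch, seq_len)
        return self.encoder(self.embed(token_ids))  # (batch, seq_len, d_model)

class InteractionModule(nn.Module):
    """Lightweight online stage: query representations cross-attend over
    the cached document representations; a linear head produces the score.
    (Mean pooling over query positions is an illustrative choice.)"""
    def __init__(self, d_model=256, n_heads=4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.score = nn.Linear(d_model, 1)

    def forward(self, query_reps, doc_reps):
        attended, _ = self.cross_attn(query_reps, doc_reps, doc_reps)
        return self.score(attended.mean(dim=1)).squeeze(-1)  # (batch,)

rep, interact = RepresentationModule(), InteractionModule()

# Offline: encode the document collection once and cache the outputs.
doc_ids = torch.randint(0, 30522, (8, 128))   # toy batch of tokenized docs
with torch.no_grad():
    doc_cache = rep(doc_ids)

# Online: encode only the short query, then run the cheap interaction.
query_ids = torch.randint(0, 30522, (8, 16))
scores = interact(rep(query_ids), doc_cache)  # one relevance score per pair
print(scores.shape)                           # torch.Size([8])
```

The speedup claimed in the abstract follows from this split: the expensive document encoding moves offline, so only the short query pass and the small interaction module run at query time.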
Citation
Gao, L., Dai, Z., & Callan, J. (2020). Modularized transfomer-based ranking framework. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (pp. 4180–4190). Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.emnlp-main.342