Abstract
Document ranking aims to sort a collection of documents by their relevance to a query. Contemporary methods explore more efficient transformers or split long documents into passages to handle long inputs. However, intensive query-irrelevant content can cause harmful distraction and high query latency. Some recent works further propose cascade document ranking models that extract relevant passages with an efficient selector before ranking; however, their selection and ranking modules are optimized and deployed almost independently, leading to selection-error reinforcement and sub-optimal performance. In fact, the document ranker can provide fine-grained supervision that makes the selector more generalizable and compatible, and the selector, built upon a different structure, can offer a distinct perspective to assist document ranking. Inspired by this, we propose a fine-grained attention alignment approach that jointly optimizes a cascade document ranking model. Specifically, we use the attention activations over passages from the ranker as fine-grained attention feedback to optimize the selector. Meanwhile, we fuse the relevance scores from the passage selector into the ranker to help compute a cooperative matching representation. Experiments on MS MARCO and TREC DL demonstrate the effectiveness of our method.
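The two alignment directions described above can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes the attention feedback is realized as a KL divergence between the ranker's passage-level attention distribution and the selector's score distribution, and that score fusion is a selector-weighted pooling of passage representations; the function names (`attention_alignment_loss`, `fuse_selector_scores`) are hypothetical.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_alignment_loss(selector_scores, ranker_attention):
    """KL(ranker attention || selector scores) over the passages of one
    document: the ranker's attention acts as fine-grained feedback (teacher)
    and the selector's scores are pushed toward it (student).
    Hypothetical formulation, sketched for illustration."""
    p = softmax(ranker_attention)   # teacher: attention mass per passage
    q = softmax(selector_scores)    # student: selector relevance per passage
    return float(np.sum(p * (np.log(p) - np.log(q))))

def fuse_selector_scores(passage_reprs, selector_scores):
    """Fuse selector relevance scores into the ranker by weighting the
    passage representations, yielding a cooperative matching representation.
    Hypothetical fusion scheme, sketched for illustration."""
    w = softmax(selector_scores)                    # (num_passages,)
    return (w[:, None] * passage_reprs).sum(axis=0)  # (hidden_dim,)
```

When the selector's score distribution matches the ranker's attention distribution, the alignment loss is zero; any mismatch yields a positive penalty, which is what lets the ranker supervise the selector at passage granularity.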
Li, Z., Tao, C., Feng, J., Shen, T., Zhao, D., Geng, X., & Jiang, D. (2023). FAA: Fine-grained Attention Alignment for Cascade Document Ranking. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 1688–1700). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-long.94