This paper proposes K-NRM, a kernel-based neural model for document ranking. Given a query and a set of documents, K-NRM uses a translation matrix that models word-level similarities via word embeddings, a new kernel-pooling technique that uses kernels to extract multi-level soft-match features, and a learning-to-rank layer that combines those features into the final ranking score. The whole model is trained end-to-end. The ranking layer learns desired feature patterns from the pairwise ranking loss. The kernels transfer the feature patterns into soft-match targets at each similarity level and enforce them on the translation matrix. The word embeddings are tuned accordingly so that they can produce the desired soft matches. Experiments on a commercial search engine's query log demonstrate the improvements of K-NRM over prior state-of-the-art feature-based and neural-based rankers, and explain the source of K-NRM's advantage: its kernel-guided embedding encodes a similarity metric tailored for matching query words to document words, and provides effective multi-level soft matches.
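To make the described architecture concrete, below is a minimal PyTorch sketch of kernel pooling over a cosine-similarity translation matrix, not the authors' implementation. The class name KNRM, the kernel means/widths, the use of log1p instead of a raw log, and the omission of padding masks are illustrative assumptions for a self-contained example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KNRM(nn.Module):
    """Minimal sketch of K-NRM-style kernel pooling (illustrative, not the paper's code)."""

    def __init__(self, vocab_size, embed_dim=300, mus=None, sigma=0.1):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # Kernel means spread over [-1, 1]; the mu=1.0 kernel captures (near-)exact matches.
        if mus is None:
            mus = [-0.9, -0.7, -0.5, -0.3, -0.1, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0]
        sigmas = [sigma] * (len(mus) - 1) + [1e-3]  # tiny width for the exact-match kernel
        self.register_buffer("mus", torch.tensor(mus).view(1, 1, 1, -1))
        self.register_buffer("sigmas", torch.tensor(sigmas).view(1, 1, 1, -1))
        self.ranker = nn.Linear(len(mus), 1)  # learning-to-rank layer over soft-TF features

    def forward(self, query_ids, doc_ids):
        # Translation matrix: cosine similarities between query and document word embeddings.
        q = F.normalize(self.embedding(query_ids), dim=-1)   # (B, Lq, D)
        d = F.normalize(self.embedding(doc_ids), dim=-1)     # (B, Ld, D)
        M = torch.bmm(q, d.transpose(1, 2)).unsqueeze(-1)    # (B, Lq, Ld, 1)

        # Kernel pooling: each RBF kernel softly counts matches at one similarity level.
        K = torch.exp(-((M - self.mus) ** 2) / (2 * self.sigmas ** 2))  # (B, Lq, Ld, K)
        soft_tf = K.sum(dim=2)                 # pool over document words
        phi = torch.log1p(soft_tf).sum(dim=1)  # log-scale, then pool over query words

        return torch.tanh(self.ranker(phi)).squeeze(-1)  # final ranking score per pair
```

In pairwise training, scores for a relevant and a non-relevant document of the same query would be compared with a hinge-style ranking loss, and the gradient flows through the kernels back into the word embeddings, which is how the end-to-end tuning described above would be realized in this sketch.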
Xiong, C., Dai, Z., Callan, J., Liu, Z., & Power, R. (2017). End-to-end neural ad-hoc ranking with kernel pooling. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2017) (pp. 55–64). Association for Computing Machinery. https://doi.org/10.1145/3077136.3080809