Enterprise-Scale Search: Accelerating Inference for Sparse Extreme Multi-Label Ranking Trees

Abstract

Tree-based models underpin many modern semantic search engines and recommender systems due to their sub-linear inference times. In industrial applications, these models operate at extreme scales, where every bit of performance is critical. Memory constraints at extreme scales also require that models be sparse, so tree-based models are often backed by sparse matrix algebra routines. However, there are currently no sparse matrix techniques specifically designed for the sparsity structure one encounters in tree-based models for extreme multi-label ranking/classification (XMR/XMC) problems. To address this issue, we present the masked sparse chunk multiplication (MSCM) technique, a sparse matrix technique tailored to XMR trees. MSCM is easy to implement, embarrassingly parallelizable, and offers a significant performance boost to any existing tree inference pipeline at no cost. We perform a comprehensive study of MSCM applied to several different sparse inference schemes and benchmark our methods on a general-purpose extreme multi-label ranking framework. We observe that MSCM gives consistently dramatic speedups across both the online and batch inference settings, single- and multi-threaded settings, and many different tree models and datasets. To demonstrate its utility in industrial applications, we apply MSCM to an enterprise-scale semantic product search problem with 100 million products and achieve sub-millisecond latency of 0.88 ms per query on a single thread, an 8x reduction in latency over vanilla inference techniques. MSCM requires no sacrifice in model accuracy, as it gives exactly the same results as standard sparse matrix techniques. Therefore, we believe that MSCM will enable users of XMR trees to save a substantial amount of compute resources in their inference pipelines at very little cost. Our code is publicly available at https://github.com/amzn/pecos, and our complete benchmarks and reproduction code are at https://github.com/UniqueUpToPermutation/pecos/tree/benchmark.
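
The core idea can be illustrated with a small sketch: during beam search over an XMR tree, each layer's ranker only needs scores for the nodes currently kept in the beam, so the sparse matrix product can be restricted (masked) to those columns. The Python/scipy sketch below is purely illustrative and is not the PECOS implementation; the function names and the simple column-slicing strategy are assumptions, and the actual MSCM technique additionally organizes the weight matrix into chunks aligned with the tree structure.

```python
# Illustrative sketch of masked sparse scoring during XMR tree beam search.
# NOT the PECOS/MSCM implementation; names (score_masked, beam_search_layer)
# are hypothetical.
import numpy as np
import scipy.sparse as sp

def score_masked(W_csc, x_csr, active_cols):
    """Score only the columns of W selected by the beam (the 'mask').

    W_csc       : (n_features, n_nodes) sparse weight matrix, CSC format
    x_csr       : (1, n_features) sparse query vector, CSR format
    active_cols : indices of tree nodes kept in the beam at this layer
    """
    # Restricting the product to active columns avoids touching the
    # overwhelming majority of the layer's weights.
    scores = x_csr.dot(W_csc[:, active_cols])   # shape (1, len(active_cols))
    return np.asarray(scores.todense()).ravel()

def beam_search_layer(W_csc, x_csr, active_cols, beam_size):
    """Keep the top-`beam_size` nodes among the active candidates."""
    scores = score_masked(W_csc, x_csr, active_cols)
    top = np.argsort(-scores)[:beam_size]
    return [active_cols[i] for i in top]
```

Even this naive masking skips most of the model at every layer; per the abstract, MSCM goes further by exploiting a chunked layout of the sparse weights matched to the tree, which is where the reported speedups come from.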

Citation (APA)

Etter, P. A., Zhong, K., Yu, H. F., Ying, L., & Dhillon, I. (2022). Enterprise-Scale Search: Accelerating Inference for Sparse Extreme Multi-Label Ranking Trees. In WWW 2022 - Proceedings of the ACM Web Conference 2022 (pp. 452–461). Association for Computing Machinery, Inc. https://doi.org/10.1145/3485447.3511973
