Fleche: An Efficient GPU Embedding Cache for Personalized Recommendations

Abstract

Deep-learning-based models dominate current production recommendation systems, but the gap between CPU-side DRAM data access and GPU processing still impedes their inference performance. A GPU-resident cache can bridge this gap, yet we find that existing systems leave the benefits of caching the embedding table, a huge sparse structure, on GPU unexploited. In this paper, we present Fleche, a holistic cache scheme with detailed designs for efficient GPU-resident embedding caching. Fleche (1) uses one cache backend for all embedding tables to improve total cache utilization, and (2) merges small kernel calls into one unitary call to reduce the overhead of kernel maintenance (e.g., kernel launching and synchronizing). Furthermore, we carefully design the cache query workflow for finer-grain parallelism. Evaluations with real-world datasets show that, compared with the prior art, Fleche significantly improves the throughput of the embedding layer by 2.0-5.4× and achieves up to 2.4× speedup of end-to-end inference throughput.
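To make the two techniques named in the abstract concrete, here is a minimal Python sketch, not Fleche's actual implementation: a single cache backend serves every embedding table by flattening (table_id, id) pairs into one key space, and lookups from all tables are batched into one call rather than one call per table. The class name, key-encoding scheme, and dict-backed store are assumptions for illustration; a real system would back this with a GPU-resident hash table and launch a single kernel.

```python
# A minimal sketch (not Fleche's actual code) of two ideas from the abstract:
# (1) one cache backend shared by all embedding tables, keyed by a flattened
# (table_id, id) pair, and (2) merging per-table lookups into one batched call.

import numpy as np

EMB_DIM = 8  # embedding width, assumed for illustration


class UnifiedEmbeddingCache:
    """One cache backend for every table; keys are (table_id << 48) | id."""

    def __init__(self):
        self._cache = {}  # stand-in for a GPU-resident hash table

    @staticmethod
    def _flat_key(table_id: int, emb_id: int) -> int:
        # Re-encode per-table ids into a single flat key space so one
        # cache structure can serve all tables.
        return (table_id << 48) | emb_id

    def insert(self, table_id: int, emb_id: int, vec: np.ndarray) -> None:
        self._cache[self._flat_key(table_id, emb_id)] = vec

    def lookup_batched(self, queries):
        """One 'unitary call' for queries spanning many tables.

        queries: list of (table_id, emb_id) pairs. Returns (embeddings,
        miss_mask); a real system would launch a single GPU kernel here
        instead of one kernel per table.
        """
        out = np.zeros((len(queries), EMB_DIM), dtype=np.float32)
        miss = np.zeros(len(queries), dtype=bool)
        for i, (tid, eid) in enumerate(queries):
            vec = self._cache.get(self._flat_key(tid, eid))
            if vec is None:
                miss[i] = True  # serviced from the CPU-side DRAM table on miss
            else:
                out[i] = vec
        return out, miss


# Usage: queries against different tables share one cache and one call.
cache = UnifiedEmbeddingCache()
cache.insert(0, 42, np.ones(EMB_DIM, dtype=np.float32))
cache.insert(3, 7, np.full(EMB_DIM, 2.0, dtype=np.float32))
embs, miss = cache.lookup_batched([(0, 42), (3, 7), (1, 99)])
print(miss)  # [False False  True]
```

Collapsing per-table lookups into one batched call is what lets the kernel-merging optimization pay off: the cost of kernel launch and synchronization is amortized over all tables' queries instead of being paid once per table.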

Cite

APA

Xie, M., Lu, Y., Lin, J., Wang, Q., Gao, J., Ren, K., & Shu, J. (2022). Fleche: An Efficient GPU Embedding Cache for Personalized Recommendations. In EuroSys 2022 - Proceedings of the 17th European Conference on Computer Systems (pp. 402–416). Association for Computing Machinery, Inc. https://doi.org/10.1145/3492321.3519554
