Adaptive Low-Precision Training for Embeddings in Click-Through Rate Prediction

Abstract

Embedding tables in click-through rate (CTR) prediction models are typically enormous, so compressing them is necessary for training and deploying these models efficiently and economically. To this end, we formulate a novel quantization training paradigm, termed low-precision training (LPT), that compresses the embeddings from the training stage onward. We also provide a theoretical analysis of its convergence, which shows that stochastic weight quantization achieves a faster convergence rate and a smaller convergence error than deterministic weight quantization in LPT. To further reduce accuracy degradation, we propose adaptive low-precision training (ALPT), which learns the step size (i.e., the quantization resolution). Experiments on two real-world datasets confirm our analysis and show that ALPT significantly improves prediction accuracy, especially at extremely low bit widths. For the first time in CTR models, we successfully train 8-bit embeddings without sacrificing prediction accuracy.
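To make the two ingredients named in the abstract concrete, below is a minimal PyTorch sketch of an embedding table trained with stochastic weight quantization and a learnable step size, using a straight-through estimator so gradients reach both the weights and the step size. This is an illustration under our own assumptions, not the authors' implementation; the class name `StochasticQuantEmbedding` and all hyperparameters are hypothetical.

```python
import torch
import torch.nn as nn


class StochasticQuantEmbedding(nn.Module):
    """Embedding table trained under low-precision quantization.

    Weights are mapped to signed `bits`-bit integers with stochastic
    rounding; the step size (quantization resolution) is a learnable
    parameter, in the spirit of ALPT as described in the abstract.
    """

    def __init__(self, num_embeddings: int, dim: int, bits: int = 8):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_embeddings, dim) * 0.01)
        # Learnable step size (quantization resolution); the initial
        # value here is an arbitrary assumption.
        self.step = nn.Parameter(torch.tensor(0.01))
        self.qmin = -(2 ** (bits - 1))
        self.qmax = 2 ** (bits - 1) - 1

    def quantize(self, w: torch.Tensor) -> torch.Tensor:
        scaled = w / self.step
        # Stochastic rounding: round up with probability equal to the
        # fractional part, down otherwise (computed outside autograd).
        with torch.no_grad():
            floor = scaled.floor()
            q = (floor + torch.bernoulli(scaled - floor)).clamp_(self.qmin, self.qmax)
        # Straight-through estimator: the forward pass uses the
        # stochastically rounded integers, while the backward pass
        # treats rounding as the identity, so gradients flow to both
        # the weights and the learnable step size.
        q_ste = q + scaled - scaled.detach()
        return q_ste * self.step

    def forward(self, ids: torch.Tensor) -> torch.Tensor:
        # Quantize only the rows looked up in this batch.
        return self.quantize(self.weight[ids])


# Example usage (shapes are illustrative):
emb = StochasticQuantEmbedding(num_embeddings=10_000, dim=16, bits=8)
ids = torch.randint(0, 10_000, (32,))
out = emb(ids)  # (32, 16) dequantized embeddings, differentiable
```

Because the rounding is unbiased (the expected quantized value equals the full-precision value), its noise averages out over training steps, which is one intuition for why stochastic rounding can converge better than deterministic rounding in this setting.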

Citation

Li, S., Guo, H., Hou, L., Zhang, W., Tang, X., Tang, R., … Li, R. (2023). Adaptive Low-Precision Training for Embeddings in Click-Through Rate Prediction. In Proceedings of the 37th AAAI Conference on Artificial Intelligence, AAAI 2023 (Vol. 37, pp. 4435–4443). AAAI Press. https://doi.org/10.1609/aaai.v37i4.25564
