Disentangled Self-Attentive Neural Networks for Click-Through Rate Prediction

Abstract

Click-Through Rate (CTR) prediction, which aims to predict the probability that a user will click on an item, is an essential task for many online applications. Because CTR data are sparse and high-dimensional, a key to making effective predictions is modeling high-order feature interactions. An efficient way to do this is to compute inner products of feature embeddings with self-attentive neural networks. To better model complex feature interactions, in this paper we propose a novel DisentanglEd Self-atTentIve NEtwork (DESTINE) framework for CTR prediction that explicitly decouples the computation of unary feature importance from pairwise interaction. Specifically, the unary term models the general importance of one feature to all other features, whereas the pairwise interaction term learns the pure impact of each feature pair. We conduct extensive experiments on two real-world benchmark datasets. The results show that DESTINE not only maintains computational efficiency but also achieves consistent improvements over state-of-the-art baselines.
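The decoupling described above can be illustrated with a minimal NumPy sketch. This is not the authors' exact formulation, only the general idea it assumes: the attention score is split into a pairwise term computed from mean-centered (whitened) queries and keys, and a key-only unary term that is shared across all query rows. All names, dimensions, and the additive combination of the two softmax-normalized terms are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def disentangled_attention(X, Wq, Wk, Wv):
    """Sketch of disentangled self-attention over feature embeddings.

    X: (n_fields, d) matrix of feature embeddings.
    Returns (n_fields, d) attended feature representations.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    # Pairwise term: mean-center Q and K so the score reflects the
    # pure interaction of each feature pair, with the shared (mean)
    # component removed.
    Qc = Q - Q.mean(axis=0)
    Kc = K - K.mean(axis=0)
    pairwise = softmax(Qc @ Kc.T / np.sqrt(d), axis=-1)
    # Unary term: key-only score modeling each feature's general
    # importance to all other features; identical for every query row.
    unary = softmax(Q.mean(axis=0) @ K.T / np.sqrt(d))  # (n_fields,)
    attn = pairwise + unary[None, :]  # broadcast unary over query rows
    return attn @ V

n_fields, d = 5, 8
X = rng.normal(size=(n_fields, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = disentangled_attention(X, Wq, Wk, Wv)
print(out.shape)  # (5, 8)
```

Because the unary term does not depend on the query, it can be computed once per layer, which is one reason the decomposition adds little overhead relative to standard self-attention.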

Citation (APA)

Xu, Y., Zhu, Y., Yu, F., Liu, Q., & Wu, S. (2021). Disentangled Self-Attentive Neural Networks for Click-Through Rate Prediction. In International Conference on Information and Knowledge Management, Proceedings (pp. 3553–3557). Association for Computing Machinery. https://doi.org/10.1145/3459637.3482088
