ContrastVAE: Contrastive Variational AutoEncoder for Sequential Recommendation

Abstract

Aiming to exploit the rich information in user behaviour sequences, sequential recommendation has been widely adopted in real-world recommender systems. However, current methods suffer from the following issues: 1) sparsity of user-item interactions, 2) uncertainty of sequential records, and 3) long-tail items. In this paper, we propose to incorporate contrastive learning into the framework of Variational AutoEncoders to address these challenges simultaneously. First, we introduce ContrastELBO, a novel training objective that extends the conventional single-view ELBO to the two-view case and theoretically builds a connection between VAEs and contrastive learning from a two-view perspective. We then propose the Contrastive Variational AutoEncoder (ContrastVAE for short), a two-branched VAE model with contrastive regularization that embodies ContrastELBO for sequential recommendation. We further introduce two simple yet effective augmentation strategies, named model augmentation and variational augmentation, to create a second view of a sequence, thereby making contrastive learning possible. Experiments on four benchmark datasets demonstrate the effectiveness of ContrastVAE and the proposed augmentation methods. Code is available at https://github.com/YuWang-1024/ContrastVAE
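
For intuition, below is a minimal sketch (in PyTorch style, not the authors' implementation) of how a two-branched VAE with contrastive regularization can be trained: each of the two views of a sequence (the original and an augmented view, standing in for the paper's model/variational augmentation) is passed through an encoder-decoder branch with its own reconstruction and KL terms, and an InfoNCE term pulls the two latent codes together. The names encoder, decoder, seq_aug, beta, gamma, and temperature are illustrative assumptions.

# Hedged sketch of a two-view VAE objective with a contrastive regularizer.
import torch
import torch.nn.functional as F

def reparameterize(mu, logvar):
    # Sample z = mu + sigma * eps via the reparameterization trick.
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)

def info_nce(z1, z2, temperature=0.5):
    # InfoNCE contrastive loss between the latent codes of the two views;
    # matching positions in the batch are treated as positive pairs.
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature          # (B, B) similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

def two_view_elbo_step(encoder, decoder, seq, seq_aug, target, beta=1.0, gamma=0.1):
    # Branch 1 sees the original sequence, branch 2 an augmented view of it.
    losses, latents = [], []
    for view in (seq, seq_aug):
        mu, logvar = encoder(view)
        z = reparameterize(mu, logvar)
        logits = decoder(z)                      # next-item prediction logits
        recon = F.cross_entropy(logits, target)  # reconstruction term
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        losses.append(recon + beta * kl)
        latents.append(z)
    # Sum of the per-view negative ELBOs plus the contrastive regularizer.
    return sum(losses) + gamma * info_nce(*latents)

In practice the second view would come from the paper's augmentation strategies rather than a generic seq_aug placeholder, and the weights beta and gamma would be tuned per dataset.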

Citation (APA)

Wang, Y., Zhang, H., Liu, Z., Yang, L., & Yu, P. S. (2022). ContrastVAE: Contrastive Variational AutoEncoder for Sequential Recommendation. In International Conference on Information and Knowledge Management, Proceedings (pp. 2057–2067). Association for Computing Machinery. https://doi.org/10.1145/3511808.3557268
