UserBERT: Pre-training User Model with Contrastive Self-supervision

24 citations · 11 Mendeley readers

Abstract

User modeling is critical for personalization. Existing methods usually train user models from task-specific labeled data, which may be insufficient. In practice, there is usually abundant unlabeled user behavior data that encodes rich universal user information, and pre-training user models on such data can empower user modeling in many downstream tasks. In this paper, we propose a user model pre-training method named UserBERT that learns universal user models on unlabeled user behavior data with two contrastive self-supervision tasks. The first is masked behavior prediction and discrimination, which aims to model the contexts of user behaviors. The second is behavior sequence matching, which aims to capture user interests that remain stable across different periods. In addition, we propose a medium-hard negative sampling framework that selects informative negative samples for better contrastive pre-training. Extensive experiments validate the effectiveness of UserBERT in user model pre-training.
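The abstract's "medium-hard negative sampling" idea — preferring negatives that are neither trivially easy nor so hard that they are likely false negatives — can be sketched as follows. This is an illustrative interpretation, not the paper's actual algorithm: the function name, the `skip_top` parameter, and the dot-product similarity are assumptions for demonstration.

```python
import numpy as np

def medium_hard_negatives(anchor, candidates, k, skip_top):
    """Illustrative medium-hard negative selection (assumed scheme).

    Rank candidate embeddings by similarity to the anchor, skip the
    `skip_top` hardest ones (the most similar candidates, which risk
    being false negatives), and return the indices of the next k —
    the "medium-hard" band that is informative for contrastive loss.
    """
    sims = candidates @ anchor          # dot-product similarity to the anchor
    order = np.argsort(-sims)           # candidate indices, most similar first
    return order[skip_top:skip_top + k].tolist()

# Toy usage: candidates lie along the anchor direction with decreasing similarity.
anchor = np.array([1.0, 0.0])
candidates = np.array([
    [0.99, 0.0],   # hardest (near-duplicate, likely false negative)
    [0.80, 0.0],   # medium-hard
    [0.50, 0.0],   # medium-hard
    [0.10, 0.0],   # easy
])
selected = medium_hard_negatives(anchor, candidates, k=2, skip_top=1)
print(selected)
```

Skipping the top-ranked candidates before taking the next k is one common heuristic for avoiding false negatives in contrastive training; the paper's actual framework may differ in how the band is chosen.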

Citation (APA)

Wu, C., Wu, F., Qi, T., & Huang, Y. (2022). UserBERT: Pre-training User Model with Contrastive Self-supervision. In SIGIR 2022 - Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 2087–2092). Association for Computing Machinery, Inc. https://doi.org/10.1145/3477495.3531810
