Locker: Locally Constrained Self-Attentive Sequential Recommendation

44 Citations · 19 Mendeley Readers

Abstract

Recently, self-attentive models have shown promise in sequential recommendation, given their potential to capture users' long-term preferences and short-term dynamics simultaneously. Despite their success, we argue that self-attention modules, as non-local operators, often fail to capture short-term user dynamics accurately due to a lack of inductive local bias. To examine this hypothesis, we conduct an analytical experiment on controlled 'short-term' scenarios. We observe a significant performance gap between self-attentive recommenders with and without local constraints, which implies that short-term user dynamics are not sufficiently learned by existing self-attentive recommenders. Motivated by this observation, we propose a simple framework, Locker, for self-attentive recommenders in a plug-and-play fashion. By combining the proposed local encoders with existing global attention heads, Locker enhances short-term user dynamics modeling while retaining the long-term semantics captured by standard self-attentive encoders. We investigate Locker with five different local methods, outperforming state-of-the-art self-attentive recommenders on three datasets by 17.19% (NDCG@20) on average.
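To make the plug-and-play idea concrete, below is a minimal PyTorch sketch of one plausible local constraint: a fixed attention window applied to a subset of heads, while the remaining heads keep standard global attention. This is an illustration under stated assumptions, not the authors' implementation; the class name and the `window_size` and `n_local_heads` parameters are hypothetical, and the band-mask approach is only one stand-in for the five local methods the paper evaluates.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LocalGlobalSelfAttention(nn.Module):
    """Multi-head self-attention where the first n_local_heads heads
    are restricted to a local window and the rest attend globally.

    Illustrative sketch of the local/global head split described in
    the abstract; names and the windowed mask are assumptions, not
    the paper's actual local encoders.
    """

    def __init__(self, d_model, n_heads, n_local_heads, window_size):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.n_local_heads = n_local_heads  # heads with the local constraint
        self.window_size = window_size      # half-width of the local window
        self.d_head = d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x):
        # x: (batch, seq_len, d_model)
        B, T, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # reshape each to (batch, heads, seq_len, d_head)
        shape = (B, T, self.n_heads, self.d_head)
        q, k, v = (t.view(*shape).transpose(1, 2) for t in (q, k, v))

        scores = q @ k.transpose(-2, -1) / self.d_head ** 0.5  # (B, H, T, T)

        # band mask: position i may only attend to j with |i - j| <= window_size
        idx = torch.arange(T, device=x.device)
        band = (idx[None, :] - idx[:, None]).abs() <= self.window_size
        # constrain only the local heads; remaining heads stay global
        local = scores[:, : self.n_local_heads]
        scores[:, : self.n_local_heads] = local.masked_fill(~band, float("-inf"))

        attn = F.softmax(scores, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, T, -1)
        return self.out(out)
```

In this sketch the split between local and global heads is fixed up front; since each row of the band mask always contains the diagonal, the softmax is well defined for every position. The paper also studies learned and model-based local encoders, so treat the static window as one example among several.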

Citation (APA)

He, Z., Zhao, H., Lin, Z., Wang, Z., Kale, A., & McAuley, J. (2021). Locker: Locally Constrained Self-Attentive Sequential Recommendation. In International Conference on Information and Knowledge Management, Proceedings (pp. 3088–3092). Association for Computing Machinery. https://doi.org/10.1145/3459637.3482136
