Recently, self-attentive models have shown promise in sequential recommendation, given their potential to capture users' long-term preferences and short-term dynamics simultaneously. Despite their success, we argue that self-attention modules, being non-local operators, often fail to capture short-term user dynamics accurately due to a lack of local inductive bias. To examine this hypothesis, we conduct an analytical experiment on controlled 'short-term' scenarios. We observe a significant performance gap between self-attentive recommenders with and without local constraints, which implies that short-term user dynamics are not sufficiently learned by existing self-attentive recommenders. Motivated by this observation, we propose Locker, a simple plug-and-play framework for self-attentive recommenders. By combining the proposed local encoders with existing global attention heads, Locker enhances short-term user dynamics modeling while retaining the long-term semantics captured by standard self-attentive encoders. We investigate Locker with five different local methods; it outperforms state-of-the-art self-attentive recommenders on three datasets by 17.19% (NDCG@20) on average.
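The core idea of combining globally attending heads with locally constrained ones can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the paper's implementation: it uses a simple fixed attention window as the "local encoder" (the paper studies five local methods), and all function names below are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v, mask=None):
    # scaled dot-product attention over a length-n item sequence;
    # positions where mask is False are excluded from attention
    scores = q @ k.T / np.sqrt(q.shape[-1])
    if mask is not None:
        scores = np.where(mask, scores, -1e9)
    return softmax(scores) @ v

def local_window_mask(n, w):
    # local constraint: position i may only attend within distance w,
    # injecting the local inductive bias plain self-attention lacks
    idx = np.arange(n)
    return np.abs(idx[:, None] - idx[None, :]) <= w

n, d = 6, 4  # toy sequence length and head dimension
rng = np.random.default_rng(0)
q, k, v = rng.normal(size=(3, n, d))

global_out = attention(q, k, v)                          # standard global head
local_out = attention(q, k, v, local_window_mask(n, 1))  # locally constrained head
# concatenate head outputs, as in multi-head attention
combined = np.concatenate([global_out, local_out], axis=-1)
```

The global head preserves long-range semantics, while the masked head is forced to model short-term transitions; concatenating their outputs lets downstream layers use both.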
He, Z., Zhao, H., Lin, Z., Wang, Z., Kale, A., & McAuley, J. (2021). Locker: Locally Constrained Self-Attentive Sequential Recommendation. In International Conference on Information and Knowledge Management, Proceedings (pp. 3088–3092). Association for Computing Machinery. https://doi.org/10.1145/3459637.3482136