Déjà vu: A Contextualized Temporal Attention Mechanism for Sequential Recommendation

Abstract

Predicting users' preferences from their sequential behaviors is crucial for modern recommender systems, yet challenging. Most existing sequential recommendation algorithms focus on the transitional structure among sequential actions but largely ignore temporal and context information when modeling the influence of a historical event on the current prediction. In this paper, we argue that the influence of past events on a user's current action should vary over time and across different contexts. We therefore propose a Contextualized Temporal Attention Mechanism that learns to weigh the influence of historical actions based not only on what the action was, but also on when and how it took place. More specifically, to dynamically calibrate the relative input dependence produced by the self-attention mechanism, we deploy multiple parameterized kernel functions to learn various temporal dynamics, and then use the context information to determine which of these reweighting kernels each input should follow. In empirical evaluations on two large public recommendation datasets, our model consistently outperformed an extensive set of state-of-the-art sequential recommendation methods.
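To make the mechanism described above concrete, here is a minimal sketch in PyTorch of how content-based self-attention scores could be re-calibrated by a mixture of learnable temporal kernels whose mixture weights are chosen from context features. This is an illustrative reading of the abstract, not the paper's actual architecture: the module name, the exponential-decay kernel family, and all tensor shapes are assumptions made for this example.

```python
# Illustrative sketch only: names, kernel form, and shapes are assumptions,
# not the implementation from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextualTemporalAttention(nn.Module):
    def __init__(self, d_model: int, d_context: int, num_kernels: int = 4):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        # One learnable decay rate per temporal kernel (exponential decay is
        # just one plausible parameterized kernel family).
        self.log_decay = nn.Parameter(torch.zeros(num_kernels))
        # Context features decide how much each kernel contributes per query.
        self.kernel_gate = nn.Linear(d_context, num_kernels)
        self.scale = d_model ** 0.5

    def forward(self, x, timestamps, context):
        # x: (B, T, d_model) embeddings of the historical actions
        # timestamps: (B, T) event times; context: (B, T, d_context)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        scores = torch.matmul(q, k.transpose(-1, -2)) / self.scale  # (B, T, T)

        # Pairwise time gaps between each query step and every past step.
        gaps = (timestamps.unsqueeze(-1) - timestamps.unsqueeze(-2)).clamp(min=0.0)
        decay = F.softplus(self.log_decay)                  # (K,)
        kernels = torch.exp(-gaps.unsqueeze(-1) * decay)    # (B, T, T, K)

        # Context picks a soft combination of kernels for each query position.
        gate = F.softmax(self.kernel_gate(context), dim=-1)  # (B, T, K)
        temporal = (kernels * gate.unsqueeze(-2)).sum(-1)    # (B, T, T)

        # Causal mask: a step only attends to itself and earlier actions.
        causal = torch.tril(torch.ones_like(scores)).bool()
        scores = scores.masked_fill(~causal, float('-inf'))
        weights = F.softmax(scores, dim=-1) * temporal
        weights = weights / weights.sum(-1, keepdim=True).clamp(min=1e-9)
        return torch.matmul(weights, v)
```

The key design point this sketch tries to capture is that the temporal reweighting is not a single fixed decay: the gate network lets the context decide, per position, which temporal dynamic (fast-fading vs. long-lived influence) the attention should follow.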

Cite

APA: Wu, J., Cai, R., & Wang, H. (2020). Déjà vu: A Contextualized Temporal Attention Mechanism for Sequential Recommendation. In Proceedings of The Web Conference 2020 (WWW 2020) (pp. 2199–2209). Association for Computing Machinery. https://doi.org/10.1145/3366423.3380285
