Attention Mixtures for Time-Aware Sequential Recommendation

Abstract

Transformers have emerged as powerful methods for sequential recommendation. However, existing architectures often overlook the complex dependencies between user preferences and the temporal context. In this short paper, we introduce MOJITO, an improved Transformer sequential recommender system that addresses this limitation. MOJITO leverages Gaussian mixtures of attention-based temporal context and item embedding representations for sequential modeling. This approach makes it possible to accurately predict which items should be recommended next to users, depending on their past actions and the temporal context. We demonstrate the relevance of our approach by empirically outperforming existing Transformers for sequential recommendation on several real-world datasets.
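To make the core idea concrete, below is a minimal PyTorch sketch of attention formed as a mixture of two components: one computed from item embeddings and one from temporal-context embeddings. Everything here is an illustrative assumption, not the authors' implementation: the class name `MixtureTemporalAttention`, the fixed two-component design, and the causal masking are hypothetical, and a simple learned convex combination stands in for the paper's Gaussian mixture.

```python
import torch
import torch.nn as nn


class MixtureTemporalAttention(nn.Module):
    """Illustrative sketch (not the paper's code): self-attention whose
    weights are a learned mixture of an item-embedding component and a
    temporal-context component."""

    def __init__(self, dim: int):
        super().__init__()
        self.q_item = nn.Linear(dim, dim)
        self.k_item = nn.Linear(dim, dim)
        self.q_time = nn.Linear(dim, dim)
        self.k_time = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        # Learned logits over the two components (assumed design choice;
        # the paper uses Gaussian mixtures, simplified here).
        self.mix_logits = nn.Parameter(torch.zeros(2))
        self.scale = dim ** -0.5

    def _attn(self, q, k, mask):
        scores = q @ k.transpose(-2, -1) * self.scale
        # Causal mask: each position attends only to past actions.
        scores = scores.masked_fill(mask, float("-inf"))
        return torch.softmax(scores, dim=-1)

    def forward(self, item_emb, time_emb):
        # item_emb, time_emb: (batch, seq_len, dim)
        seq_len = item_emb.size(1)
        mask = torch.triu(
            torch.ones(seq_len, seq_len, dtype=torch.bool,
                       device=item_emb.device),
            diagonal=1,
        )
        attn_item = self._attn(self.q_item(item_emb), self.k_item(item_emb), mask)
        attn_time = self._attn(self.q_time(time_emb), self.k_time(time_emb), mask)
        w = torch.softmax(self.mix_logits, dim=0)  # mixture weights sum to 1
        # Convex combination of two attention distributions is itself a
        # valid attention distribution over past positions.
        attn = w[0] * attn_item + w[1] * attn_time
        return attn @ self.v(item_emb)


# Toy usage with random embeddings:
layer = MixtureTemporalAttention(dim=64)
items = torch.randn(8, 50, 64)   # item-embedding sequence
times = torch.randn(8, 50, 64)   # temporal-context embeddings
out = layer(items, times)        # (8, 50, 64)
```

The design point the sketch illustrates is that mixing attention distributions, rather than concatenating inputs, lets the model trade off item-based and time-based evidence per prediction while each component remains a proper attention distribution.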

Citation (APA)

Tran, V. A., Salha-Galvan, G., Sguerra, B., & Hennequin, R. (2023). Attention Mixtures for Time-Aware Sequential Recommendation. In SIGIR 2023 - Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 1821–1826). Association for Computing Machinery, Inc. https://doi.org/10.1145/3539618.3591951
