Memory-Augmented Attention Network for Sequential Recommendation


Abstract

Sequential recommendation has attracted increasing interest in recent years. Many models have been proposed to leverage sequential user-item interaction data, including those based on Markov chains or recurrent neural networks. Most of these models are designed for the scenario where each historical record is composed of a single item. However, a record could instead be a set of items (a session), such as a music playlist or a shopping basket in e-commerce applications. Leveraging this session structure to improve recommendation effectiveness is a challenge. To this end, we propose a MEmory-augmented Attention Network for Sequential recommendation (MEANS) to effectively recommend the next items given sequential session data. The most recent sessions are stored in an external memory after a max-pooling operation. The long-term user preference is learned through an attention network stacked on the memory layer. Finally, the mixture of long-term and short-term preference is fed into the prediction layer to make recommendations. Extensive experiments on four real datasets show that MEANS outperforms various state-of-the-art sequential recommendation models.
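The abstract only outlines the data flow (max-pooled sessions as external memory, attention over that memory for long-term preference, mixture with the short-term session vector for prediction). The PyTorch sketch below is a minimal, hypothetical rendering of that flow for intuition; the class and parameter names (MEANSSketch, memory_size, dim) are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class MEANSSketch(nn.Module):
    """Hypothetical sketch of the flow described in the abstract:
    max-pooled recent sessions kept as an external memory, an attention
    read over the memory for long-term preference, and a mixture with
    the current session (short-term preference) for next-item scoring."""

    def __init__(self, num_items, dim=64, memory_size=5):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, dim)
        self.memory_size = memory_size          # number of recent sessions kept in memory
        self.mix = nn.Linear(2 * dim, dim)      # combines long- and short-term vectors
        self.out = nn.Linear(dim, num_items)    # prediction layer over the item catalog

    def pool_session(self, session_items):
        # session_items: (batch, session_len) item ids -> (batch, dim) session vector
        return self.item_emb(session_items).max(dim=1).values

    def forward(self, recent_sessions, current_session):
        # recent_sessions: (batch, memory_size, session_len); current_session: (batch, session_len)
        # Build the external memory from max-pooled recent sessions.
        memory = torch.stack(
            [self.pool_session(recent_sessions[:, i]) for i in range(self.memory_size)],
            dim=1,
        )                                                          # (batch, memory_size, dim)
        short_term = self.pool_session(current_session)            # (batch, dim)

        # Attention read over the memory, using the short-term vector as query.
        scores = torch.softmax(
            torch.bmm(memory, short_term.unsqueeze(-1)).squeeze(-1), dim=-1
        )                                                          # (batch, memory_size)
        long_term = torch.bmm(scores.unsqueeze(1), memory).squeeze(1)  # (batch, dim)

        # Mix long- and short-term preference and score every candidate item.
        user_state = torch.tanh(self.mix(torch.cat([long_term, short_term], dim=-1)))
        return self.out(user_state)                                # (batch, num_items)
```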

Cite

CITATION STYLE

APA

Hu, C., He, P., Sha, C., & Niu, J. (2019). Memory-Augmented Attention Network for Sequential Recommendation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11881 LNCS, pp. 228–242). Springer. https://doi.org/10.1007/978-3-030-34223-4_15
