Tensor-Based Sequential Learning via Hankel Matrix Representation for Next Item Recommendations

Abstract

Self-attentive transformer models have recently been shown to solve the next item recommendation task very efficiently. The learned attention weights capture sequential dynamics in user behavior and generalize well. Motivated by the special structure of the learned parameter space, we ask whether it can be mimicked with an alternative, more lightweight approach. We develop a new tensor factorization-based model that ingrains structural knowledge about sequential data into the learning process. We demonstrate how certain properties of a self-attention network can be reproduced with our approach, which is based on a special Hankel matrix representation. The resulting model has a shallow linear architecture. Remarkably, it achieves significant speedups in training time over its neural counterpart while performing competitively in terms of recommendation quality.
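The abstract only names the Hankel matrix representation without defining it, so as a rough illustration: a Hankel matrix has constant anti-diagonals, which means that stacking consecutive sliding windows of a user's interaction sequence naturally produces one. A minimal sketch follows; the toy sequence and the window length `w` are assumptions for illustration, not details taken from the paper.

```python
import numpy as np
from scipy.linalg import hankel

# Toy interaction history for one user (item ids); purely illustrative.
sequence = np.array([3, 7, 1, 9, 4, 2])

# Window length is an assumed hyperparameter for this sketch.
w = 3

# hankel(c, r) builds a matrix with constant anti-diagonals whose first
# column is c and whose last row is r. Each column of H is then a
# consecutive length-w window of the sequence.
H = hankel(sequence[:w], sequence[w - 1:])
print(H)
# [[3 7 1 9]
#  [7 1 9 4]
#  [1 9 4 2]]
```

Arranging the sequence this way exposes local sequential context as matrix structure, which is the kind of structural prior a tensor factorization model can exploit in place of learned attention weights.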

Citation (APA)

Frolov, E., & Oseledets, I. (2023). Tensor-Based Sequential Learning via Hankel Matrix Representation for Next Item Recommendations. IEEE Access, 11, 6357–6371. https://doi.org/10.1109/ACCESS.2023.3234863
