Leveraging Large Language Models for Sequential Recommendation

Abstract

Sequential recommendation problems have received increasing research attention in recent years, leading to a large variety of algorithmic approaches. In this work, we explore how large language models (LLMs), which are currently having disruptive effects across many AI-based applications, can be used to build or improve sequential recommendation approaches. Specifically, we devise and evaluate three approaches that leverage the power of LLMs in different ways. Our experiments on two datasets show that initializing the state-of-the-art sequential recommendation model BERT4Rec with embeddings obtained from an LLM improves NDCG by 15-20% compared to the vanilla BERT4Rec model. Furthermore, we find that a simple approach that uses LLM embeddings directly for producing recommendations can provide competitive performance by surfacing semantically related items. We publicly share the code and data of our experiments to ensure reproducibility.
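
The abstract does not spell out the exact retrieval procedure, but the "simple approach that leverages LLM embeddings" can be sketched as nearest-neighbor scoring over item embeddings. In the minimal sketch below, `embed_text` is a hypothetical placeholder for a real LLM embedding model (it only generates deterministic-per-run random vectors); only the cosine-similarity ranking reflects the idea described above, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical stand-in for an LLM embedding endpoint. In practice this
# would call a real text-embedding model; here it returns a seeded random
# unit vector per item title so the example runs offline.
def embed_text(text: str, dim: int = 8) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def recommend(session_items: list[str], catalog: list[str], k: int = 3) -> list[str]:
    """Rank unseen catalog items by cosine similarity between their
    embedding and the mean embedding of the current session's items."""
    session_vec = np.mean([embed_text(t) for t in session_items], axis=0)
    session_vec /= np.linalg.norm(session_vec)
    scores = {
        item: float(embed_text(item) @ session_vec)  # cosine (unit vectors)
        for item in catalog
        if item not in session_items
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

catalog = ["wireless mouse", "mechanical keyboard", "USB-C hub",
           "espresso machine", "coffee grinder", "milk frother"]
print(recommend(["espresso machine", "coffee grinder"], catalog))
```

With a genuine LLM embedding model in place of the toy `embed_text`, semantically related items (here, the remaining coffee accessories) would score highest, which is the behavior the abstract attributes to the embedding-only baseline.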

Cite

APA:

Harte, J., Zorgdrager, W., Louridas, P., Katsifodimos, A., Jannach, D., & Fragkoulis, M. (2023). Leveraging Large Language Models for Sequential Recommendation. In Proceedings of the 17th ACM Conference on Recommender Systems (RecSys 2023) (pp. 1096-1102). Association for Computing Machinery. https://doi.org/10.1145/3604915.3610639
