A new cross-validation technique to evaluate quality of recommender systems

Abstract

The topic of recommender systems is rapidly gaining interest in the user-behaviour modelling research domain. Over the years, various recommender algorithms based on different mathematical models have been introduced in the literature. Researchers proposing a new recommender model or modifying an existing algorithm should take into account a variety of key performance indicators, such as execution time, recall and precision. To date, and to the best of our knowledge, no general cross-validation scheme for evaluating the performance of recommender algorithms has been developed. To fill this gap we propose an extension of conventional cross-validation: besides splitting the initial data into training and test subsets, we also split the attribute description of the dataset into a hidden and a visible part. We then discuss how such a splitting scheme can be applied in practice. Empirical validation is performed with traditional user-based and item-based recommender algorithms on the MovieLens dataset. © 2012 Springer-Verlag.
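The splitting scheme described in the abstract — cross-validating over users (training/test) while simultaneously partitioning the items, i.e. the attribute description, into visible and hidden parts — can be sketched as follows. This is a minimal illustration of the idea only, not the paper's implementation; all function and variable names are hypothetical.

```python
import numpy as np

def bimodal_cv_splits(ratings, n_user_folds=5, n_item_folds=5, seed=0):
    """Yield (train_users, test_users, visible_items, hidden_items) tuples.

    Besides the conventional split of users into training and test
    subsets, the item set is also split: the visible items would be fed
    to the recommender, while the hidden items would be used to measure
    precision and recall of its predictions.  Illustrative sketch only.
    """
    rng = np.random.default_rng(seed)
    users = rng.permutation(ratings.shape[0])
    items = rng.permutation(ratings.shape[1])
    user_folds = np.array_split(users, n_user_folds)
    item_folds = np.array_split(items, n_item_folds)
    for test_users in user_folds:
        train_users = np.setdiff1d(users, test_users)
        for hidden_items in item_folds:
            visible_items = np.setdiff1d(items, hidden_items)
            yield train_users, test_users, visible_items, hidden_items

# Toy example: a 10-user x 8-item binary rating matrix.
R = (np.random.default_rng(1).random((10, 8)) > 0.5).astype(int)
splits = list(bimodal_cv_splits(R, n_user_folds=2, n_item_folds=2))
```

With 2 user folds and 2 item folds this yields 2 × 2 = 4 evaluation settings; each setting partitions both the user set and the item set, so every user and every item serves in both roles across the full cross-validation.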

Citation (APA)

Ignatov, D. I., Poelmans, J., Dedene, G., & Viaene, S. (2012). A new cross-validation technique to evaluate quality of recommender systems. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7143 LNCS, pp. 195–202). https://doi.org/10.1007/978-3-642-27387-2_25
