Evaluating Strategies for Selecting Test Datasets in Recommender Systems

Abstract

Recommender systems based on collaborative filtering are widely used to predict users’ behaviour in large databases where users rate items. The prediction model is built from a training dataset using the matrix factorization method and validated against a test dataset in order to measure the prediction error. Random selection is the simplest and most intuitive way to build test datasets. Nevertheless, one could consider deterministic methods that select test ratings uniformly across the database, in order to obtain a balanced contribution from all users and items. In this paper, we perform several experiments validating recommender systems using both random and deterministic strategies for selecting test datasets. We consider a zigzag deterministic strategy that selects ratings uniformly across the rows and columns of the ratings matrix, following a diagonal path. After analysing the statistical results, we conclude that the deterministic strategy offers no particular advantage.
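The two selection strategies contrasted in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the exact zigzag traversal used by the authors is not specified here, so the diagonal walk and the sampling step below are assumptions; the function names `random_test_split` and `zigzag_test_split` are hypothetical.

```python
import numpy as np

def random_test_split(rated, test_fraction, rng):
    """Randomly pick a fraction of the observed ratings as the test set."""
    idx = np.argwhere(rated)          # (row, col) positions of known ratings
    rng.shuffle(idx)                  # shuffle positions in place
    n_test = int(len(idx) * test_fraction)
    return {tuple(p) for p in idx[:n_test]}

def zigzag_test_split(rated, test_fraction):
    """Walk the matrix along anti-diagonals and keep every k-th observed
    rating, spreading test cells uniformly over rows and columns
    (an assumed reading of the paper's diagonal path)."""
    n_rows, n_cols = rated.shape
    # enumerate observed ratings in anti-diagonal order
    path = [(i, d - i)
            for d in range(n_rows + n_cols - 1)
            for i in range(max(0, d - n_cols + 1), min(n_rows, d + 1))
            if rated[i, d - i]]
    step = max(1, round(1 / test_fraction))
    return set(path[::step])

rng = np.random.default_rng(0)
rated = rng.random((6, 8)) < 0.7      # toy mask of observed ratings
test_rnd = random_test_split(rated, 0.2, rng)
test_zig = zigzag_test_split(rated, 0.2)
```

Both functions return a set of (row, column) positions to hold out; the remaining observed ratings would form the training set for matrix factorization.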

Citation (APA)

Pajuelo-Holguera, F., Gómez-Pulido, J. A., & Ortega, F. (2019). Evaluating Strategies for Selecting Test Datasets in Recommender Systems. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11734 LNAI, pp. 243–253). Springer Verlag. https://doi.org/10.1007/978-3-030-29859-3_21
