On the discriminative power of hyper-parameters in cross-validation and how to choose them


Abstract

Hyper-parameter tuning is a crucial task to make a model perform at its best. However, despite well-established methodologies, some aspects of tuning remain unexplored. For example, tuning may affect not just accuracy but also novelty, and its effect may depend on the adopted dataset. Moreover, it can sometimes be sufficient to concentrate on a single parameter only (or a few of them) instead of the overall set. In this paper we report on our investigation of hyper-parameter tuning by performing an extensive 10-fold cross-validation on MovieLens and Amazon Movies for three well-known baselines: User-kNN, Item-kNN, and BPR-MF. We adopted a grid search strategy considering approximately 15 values for each parameter, and then evaluated each combination of parameters in terms of accuracy and novelty. We investigated the discriminative power of nDCG, Precision, Recall, MRR, EFD, and EPC, and, finally, we analyzed the role of the parameters in model evaluation for cross-validation.
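The experimental protocol described above — a grid search over the Cartesian product of hyper-parameter values, with each combination scored by k-fold cross-validation — can be sketched as follows. This is a minimal, generic illustration, not the authors' code: the `evaluate` callback, the toy parameter grid, and the sequential fold splitter are all placeholders standing in for the paper's recommender models and metrics (nDCG, Precision, Recall, MRR, EFD, EPC).

```python
from itertools import product

def k_fold_splits(n_items, k=10):
    """Yield (train_idx, test_idx) index lists for k-fold cross-validation."""
    fold_size = n_items // k
    indices = list(range(n_items))
    for i in range(k):
        test = indices[i * fold_size:(i + 1) * fold_size]
        train = indices[:i * fold_size] + indices[(i + 1) * fold_size:]
        yield train, test

def grid_search(param_grid, evaluate, n_items, k=10):
    """Score every combination in param_grid with k-fold CV and
    return the combination with the highest mean score.

    param_grid: dict mapping parameter name -> list of candidate values
    evaluate:   callable(params, train_idx, test_idx) -> float score
    """
    best_params, best_score = None, float("-inf")
    for combo in product(*param_grid.values()):
        params = dict(zip(param_grid.keys(), combo))
        # Average the chosen metric over all k folds for this combination.
        scores = [evaluate(params, train, test)
                  for train, test in k_fold_splits(n_items, k)]
        mean = sum(scores) / len(scores)
        if mean > best_score:
            best_params, best_score = params, mean
    return best_params, best_score
```

In the paper's setting, `param_grid` would hold roughly 15 candidate values per hyper-parameter of User-kNN, Item-kNN, or BPR-MF, and `evaluate` would train the model on the train folds and compute an accuracy or novelty metric on the held-out fold.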

Citation (APA)

Anelli, V. W., Noia, T. D., Sciascio, E. D., Pomo, C., & Ragone, A. (2019). On the discriminative power of hyper-parameters in cross-validation and how to choose them. In RecSys 2019 - 13th ACM Conference on Recommender Systems (pp. 447–451). Association for Computing Machinery, Inc. https://doi.org/10.1145/3298689.3347010
