A comparative analysis of offline and online evaluations and discussion of research paper recommender system evaluation

82 citations · 159 Mendeley readers

Abstract

Offline evaluations are the most common evaluation method for research paper recommender systems. However, no thorough discussion of the appropriateness of offline evaluations has taken place, despite some voiced criticism. We conducted a study in which we evaluated various recommendation approaches with both offline and online evaluations. We found that results of offline and online evaluations often contradict each other. We discuss this finding in detail and conclude that offline evaluations may be inappropriate for evaluating research paper recommender systems in many settings. © 2013 ACM.
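
The abstract contrasts offline and online evaluation without spelling out how each is typically measured. The sketch below is purely illustrative and not taken from the paper: it assumes a hypothetical offline precision-at-k computed against held-out relevant papers, and a hypothetical online click-through rate from a live deployment, and shows how the two can rank the same approaches differently — the kind of contradiction the study reports. All names and numbers are invented for illustration.

```python
from typing import List, Set

def precision_at_k(recommended: List[str], relevant: Set[str], k: int = 10) -> float:
    """Offline metric: fraction of the top-k recommendations that appear
    in a held-out 'relevant' set (e.g., papers the user later added)."""
    top_k = recommended[:k]
    if not top_k:
        return 0.0
    return sum(1 for paper in top_k if paper in relevant) / len(top_k)

def click_through_rate(clicks: int, impressions: int) -> float:
    """Online metric: clicks per displayed recommendation in a live system."""
    return clicks / impressions if impressions else 0.0

# Hypothetical numbers, purely for illustration (not results from the paper).
offline = {
    "approach_A": precision_at_k(["p1", "p7", "p3"], relevant={"p1", "p3", "p9"}, k=3),
    "approach_B": precision_at_k(["p4", "p5", "p6"], relevant={"p1", "p3", "p9"}, k=3),
}
online = {
    "approach_A": click_through_rate(clicks=40, impressions=1000),
    "approach_B": click_through_rate(clicks=80, impressions=1000),
}

# The two evaluations can disagree about which approach is "better".
print("offline ranking:", sorted(offline, key=offline.get, reverse=True))
print("online ranking: ", sorted(online, key=online.get, reverse=True))
```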

Citation (APA)

Beel, J., Genzmehr, M., Langer, S., Nürnberger, A., & Gipp, B. (2013). A comparative analysis of offline and online evaluations and discussion of research paper recommender system evaluation. In ACM International Conference Proceeding Series (pp. 7–14). https://doi.org/10.1145/2532508.2532511
