Metrics for evaluating the serendipity of recommendation lists

124 citations · 118 readers (Mendeley)

Abstract

In this paper we propose two metrics, unexpectedness and unexpectedness_r, for measuring the serendipity of recommendation lists produced by recommender systems. Recommender systems have been evaluated in many ways. Although prediction quality is frequently measured by various accuracy metrics, a recommender system must be not only accurate but also useful. A few researchers have argued that the bottom-line measure of a recommender system's success should be user satisfaction. The basic idea behind our metrics is that unexpectedness is the distance between the results produced by the method under evaluation and those produced by a primitive prediction method. unexpectedness scores a whole recommendation list, while unexpectedness_r additionally takes the ranking within the list into account. From the viewpoints of both accuracy and serendipity, we evaluated the results obtained by three prediction methods in experimental studies on television program recommendations. © 2008 Springer-Verlag Berlin Heidelberg.
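One simple reading of the idea sketched in the abstract can be written in a few lines of Python. This is an illustrative sketch, not the paper's exact definitions: it assumes each recommended item has a score from the method under evaluation and a score from a primitive baseline predictor, that "distance" means the positive gap between those scores counted only for relevant items, and that the rank-aware variant discounts position i by 1/i. All function names and the discount scheme are hypothetical.

```python
def unexpectedness(scores, prim_scores, relevant):
    """Sketch: average the positive gap between the evaluated method's
    score and a primitive baseline's score, counting only relevant items.
    (Illustrative reading of the abstract, not the paper's formula.)"""
    n = len(scores)
    total = 0.0
    for s, p, rel in zip(scores, prim_scores, relevant):
        total += max(s - p, 0.0) * (1.0 if rel else 0.0)
    return total / n

def unexpectedness_r(scores, prim_scores, relevant):
    """Rank-aware variant: weight each item's contribution by 1/i,
    where i is its 1-based position in the list (assumed discount)."""
    total = 0.0
    norm = 0.0
    for i, (s, p, rel) in enumerate(zip(scores, prim_scores, relevant), start=1):
        total += max(s - p, 0.0) * (1.0 if rel else 0.0) / i
        norm += 1.0 / i
    return total / norm
```

Under this sketch, a list whose top-ranked items deviate from the primitive predictor (while still being relevant) scores higher on unexpectedness_r than one where the deviations appear only near the bottom.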

Citation (APA)

Murakami, T., Mori, K., & Orihara, R. (2008). Metrics for evaluating the serendipity of recommendation lists. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4914 LNAI, pp. 40–46). https://doi.org/10.1007/978-3-540-78197-4_5
