Relevance as a metric for evaluating machine learning algorithms

Citations: 12
Readers (Mendeley): 17

Abstract

In machine learning, the choice of a learning algorithm that is suitable for the application domain is critical. The performance metric used to compare different algorithms must also reflect the concerns of users in the application domain under consideration. In this paper, we propose a novel probability-based performance metric called Relevance Score for evaluating supervised learning algorithms. We evaluate the proposed metric through empirical analysis on a dataset gathered from an intelligent lighting pilot installation. In comparison to the commonly used Classification Accuracy metric, the Relevance Score proves to be more appropriate for a certain class of applications. © 2013 Springer-Verlag.

Citation (APA)

Gopalakrishna, A. K., Ozcelebi, T., Liotta, A., & Lukkien, J. J. (2013). Relevance as a metric for evaluating machine learning algorithms. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7988 LNAI, pp. 195–208). https://doi.org/10.1007/978-3-642-39712-7_15
