We generalize an information-based reward function, introduced by Good (1952), for use with machine learners of classification functions. We discuss the advantages of our function over predictive accuracy and the metric of Kononenko and Bratko (1991). We examine the use of information reward to evaluate popular machine learning algorithms (e.g., C5.0, Naive Bayes, CaMML) using UCI archive datasets, finding that the assessment implied by predictive accuracy is often reversed when using information reward.
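The abstract does not spell out the reward function itself, but Good's (1952) binary information reward is commonly stated as 1 + log2 of the probability assigned to the true class. The following is a minimal sketch under that assumption; the function name is illustrative, and the paper's Bayesian generalization differs from this basic binary form.

```python
import math

def good_information_reward(p: float, correct: bool) -> float:
    """Good's (1952) logarithmic reward for a binary prediction (sketch).

    `p` is the probability the learner assigned to its predicted class.
    The reward is 1 + log2(p) if the prediction was correct, and
    1 + log2(1 - p) otherwise: a calibrated, confident, correct
    prediction earns close to 1, while a confident wrong prediction
    is penalized without bound.
    """
    q = p if correct else 1.0 - p
    return 1.0 + math.log2(q)

# An uninformative 0.5 prediction scores 0 either way,
# which is one reason this reward is harder to game than raw accuracy.
print(good_information_reward(0.5, True))   # 0.0
print(good_information_reward(1.0, True))   # 1.0
```

Unlike predictive accuracy, this reward distinguishes a hedged correct guess from a confident one, which is the property the paper's comparison against accuracy turns on.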
CITATION STYLE
Hope, L. R., & Korb, K. B. (2002). Bayesian information reward. In Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science) (Vol. 2557, pp. 272–283). Springer Verlag. https://doi.org/10.1007/3-540-36187-1_24