Recognizing reliability of discovered knowledge

Abstract

When using discovered knowledge for decision making (e.g. classification in the case of machine learning), the question of reliability becomes very important. Unlike the global view of an algorithm (evaluating overall accuracy on some testing data) or multistrategy learning (voting among several classifiers), we propose a “local” evaluation of each example using a single classifier. The basic idea is to learn to classify the correct decisions made by the classifier. This is done by creating a new class attribute “match” and running the learning algorithm on the same input attributes. We call this (second) step verification. First preliminary experimental results of this method used with C4.5 and CN4 are reported. These results show that: (1) if the classification accuracy is very high, it makes no sense to perform the verification step (since it will produce only the majority rule); (2) in multiple-class and/or noisy domains the verification accuracy can be significantly higher than the classification accuracy.
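The two-step scheme described in the abstract is easy to reproduce in outline. Below is a minimal sketch, not the paper's implementation: scikit-learn's DecisionTreeClassifier stands in for C4.5, the digits dataset is a placeholder domain, and the “match” labels for the verifier are taken out-of-fold (an assumption made here so that a fully grown tree does not produce an all-correct, majority-rule match attribute on its own training data, the degenerate case noted in finding (1)).

```python
# Sketch of the verification step, assuming scikit-learn's
# DecisionTreeClassifier as a stand-in for C4.5 (the paper's
# learners are C4.5 and CN4) and the digits dataset as a
# placeholder multi-class domain.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 1: ordinary classification with the base learner.
clf = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)

# Step 2 (verification): build the new class attribute "match" --
# True where the base classifier's decision agrees with the true
# class -- and learn it from the same input attributes. Out-of-fold
# predictions are an assumption here, used to obtain realistic
# match labels on the training data.
match_tr = cross_val_predict(DecisionTreeClassifier(random_state=0),
                             X_tr, y_tr, cv=5) == y_tr
verifier = DecisionTreeClassifier(random_state=0).fit(X_tr, match_tr)

# Compare classification accuracy with verification accuracy.
clf_acc = np.mean(clf.predict(X_te) == y_te)
match_te = clf.predict(X_te) == y_te
ver_acc = np.mean(verifier.predict(X_te) == match_te)
print(f"classification accuracy: {clf_acc:.3f}")
print(f"verification accuracy:   {ver_acc:.3f}")
```

One reason finding (2) is plausible: verification is a two-class task regardless of how many classes the original domain has, so in a hard multi-class or noisy domain even the verifier's majority baseline can exceed the base classifier's accuracy, and any real structure it learns raises it further.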

Citation (APA)

Berka, P. (1997). Recognizing reliability of discovered knowledge. In Lecture Notes in Computer Science (Vol. 1263, pp. 307–314). Springer. https://doi.org/10.1007/3-540-63223-9_129
