Monitoring knowledge acquisition instead of evaluating knowledge bases

Abstract

Evaluating the success of a knowledge acquisition (KA) task is difficult and expensive. Most evaluation approaches rely on the expert, either directly or indirectly through data previously prepared with the expert's help. In incremental KA, knowledge base (KB) errors are monitored and corrected by an expert, so a record of the knowledge-based system's (KBS) performance is usually easy to keep as the KB evolves. We propose to integrate into the incremental KA process an evaluation process based on statistical analysis, estimating the effectiveness of the KBS as it is actually evolved. We tailor this analysis for Ripple Down Rules (RDR), an effective incremental KA methodology in which a record of KBS performance can be easily derived and updated as the system processes new cases. An RDR KB is a collection of rules with hierarchical exceptions, which are entered and validated by the expert in the context of their use. This greatly facilitates knowledge maintenance, which in RDR characteristically overlaps with the incremental KA process. The work in this paper aims to overlap evaluation with the maintenance and development of the knowledge base. It also minimises the major expense of deploying an RDR KBS: keeping a domain expert on-line during maintenance and the initial period of deployment. The expert is not kept on-line longer than is absolutely necessary. We use the structure and semantics of an evolving RDR KB, combined with proven statistical methods from machine learning, to estimate the added value of every KB update as the KB evolves. Using these estimates, decision-makers in the organisation employing the KBS can apply a cost-benefit analysis to the continuation of the incremental KA process, and so determine when this process, which requires keeping an expert on-line, should be terminated.
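The abstract's description of an RDR KB as "rules with hierarchical exceptions, entered and validated by the expert in the context of their use" can be illustrated with a minimal sketch. This is not the paper's implementation; the class and method names (`RDRNode`, `classify`, `add_exception`) and the attribute-value cases are illustrative assumptions, chosen only to show how an exception rule refines its parent and how corrections are added in the context of a misclassified case.

```python
# Hypothetical sketch of a Ripple Down Rules (RDR) knowledge base.
# Cases are plain dicts; conditions are predicates over a case.

class RDRNode:
    def __init__(self, condition, conclusion):
        self.condition = condition    # predicate tested against a case
        self.conclusion = conclusion  # conclusion given if the rule fires
        self.except_child = None      # refinement tried when the rule fires
        self.else_child = None        # alternative tried when it does not

    def classify(self, case):
        """Return the conclusion of the last satisfied rule on the path."""
        if self.condition(case):
            if self.except_child is not None:
                refined = self.except_child.classify(case)
                if refined is not None:
                    return refined    # an exception overrides this rule
            return self.conclusion
        if self.else_child is not None:
            return self.else_child.classify(case)
        return None

    def add_exception(self, condition, conclusion):
        """Expert correction: add an exception for a misclassified case."""
        node = RDRNode(condition, conclusion)
        if self.except_child is None:
            self.except_child = node
        else:
            # further exceptions chain along the else-branch
            cur = self.except_child
            while cur.else_child is not None:
                cur = cur.else_child
            cur.else_child = node
        return node

# A default rule, refined once after the expert rejects a conclusion.
root = RDRNode(lambda c: True, "healthy")
root.add_exception(lambda c: c.get("temp", 0) > 38, "fever")

assert root.classify({"temp": 39}) == "fever"
assert root.classify({"temp": 36}) == "healthy"
```

Because each exception is validated only against the case that triggered it, the monitoring the paper proposes fits naturally: every processed case yields an accept/correct outcome that can feed a running statistical estimate of KB effectiveness.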

Citation (APA)

Beydoun, G., & Hoffmann, A. (2000). Monitoring knowledge acquisition instead of evaluating knowledge bases. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1937, pp. 387–402). Springer Verlag. https://doi.org/10.1007/3-540-39967-4_30
