Reflecting on a process to automatically evaluate ontological material generated automatically

Abstract

Ontology evaluation is a labour-intensive job. Hence, it is relevant to investigate automated methods. But before an automated ontology evaluation method can be considered reliable and consistent, it must be validated by human experts. In this paper we present a meta-analysis of an automated ontology evaluation procedure as it has been applied in earlier experiments. Many of the principles touched upon apply to ontology evaluation in general, whether automated or not. After all, the overall quality of an ontology is determined not only by the quality of the artifact itself, but also by the quality of its evaluation method. Analysing the set-up and conditions under which an ontology evaluation takes place can only benefit the entire domain of ontology engineering. © 2010 Springer-Verlag Berlin Heidelberg.

Citation (APA)

Spyns, P. (2010). Reflecting on a process to automatically evaluate ontological material generated automatically. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6428 LNCS, pp. 606–615). https://doi.org/10.1007/978-3-642-16961-8_85
