Ontology evaluation is a critical task, even more so when the ontology is the output of an automatic system rather than the result of a conceptualisation effort by a team of domain specialists and knowledge engineers. This paper provides an evaluation of the OntoLearn ontology learning system. The proposed evaluation strategy is twofold: first, we provide a detailed quantitative analysis of the ontology learning algorithms, measuring the accuracy of OntoLearn under different learning conditions; second, we automatically generate natural language descriptions of formal concept specifications, to facilitate per-concept qualitative analysis by domain specialists.
Navigli, R., Velardi, P., Cucchiarelli, A., & Neri, F. (2004). Quantitative and qualitative evaluation of the OntoLearn ontology learning system. In Proceedings of COLING 2004, the 20th International Conference on Computational Linguistics. Association for Computational Linguistics. https://doi.org/10.3115/1220355.1220505