Evaluation measures for ontology matchers in supervised matching scenarios

Abstract

Precision and Recall, as well as their combination in terms of F-measure, are widely used measures in computer science and are generally applied to evaluate the overall performance of ontology matchers in fully automatic, unsupervised scenarios. In this paper, we investigate the case of supervised matching, where automatically created ontology alignments are verified by an expert. We motivate and describe this use case and its characteristics, and discuss why traditional, F-measure-based evaluation measures are not suitable for it. We therefore investigate several alternative evaluation measures and propose the use of Precision@N curves as a means to assess different matching systems for supervised matching. We compare the ranking of several state-of-the-art matchers using Precision@N curves to the traditional F-measure-based ranking, and discuss means to combine matchers in a way that optimizes the user support in supervised ontology matching. © 2013 Springer-Verlag.

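As a rough illustration of the measure named in the abstract (not the paper's own implementation): a Precision@N curve can be read as ordinary precision computed over the top-N correspondences of a confidence-ranked alignment, plotted for increasing N. A minimal Python sketch, assuming a matcher output ranked by confidence and a reference alignment given as a set of correct correspondences; all names and example pairs below are hypothetical:

    def precision_at_n_curve(ranked_correspondences, reference_alignment):
        """Return a list of (n, precision@n) for n = 1 .. len(ranked_correspondences)."""
        curve = []
        correct = 0
        for n, correspondence in enumerate(ranked_correspondences, start=1):
            # Count a hit if the n-th ranked correspondence is in the reference alignment.
            if correspondence in reference_alignment:
                correct += 1
            curve.append((n, correct / n))
        return curve

    # Hypothetical usage: correspondences as (source entity, target entity) pairs.
    ranked = [("o1#Person", "o2#Human"), ("o1#Car", "o2#Vehicle"), ("o1#Dog", "o2#Plant")]
    reference = {("o1#Person", "o2#Human"), ("o1#Car", "o2#Vehicle")}
    print(precision_at_n_curve(ranked, reference))
    # [(1, 1.0), (2, 1.0), (3, 0.6666...)]

In a supervised setting, such a curve reflects how many of the first N correspondences shown to the expert are correct, which is why it can be more informative there than a single F-measure value.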
Citation (APA)

Ritze, D., Paulheim, H., & Eckert, K. (2013). Evaluation measures for ontology matchers in supervised matching scenarios. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8219 LNCS, pp. 392–407). https://doi.org/10.1007/978-3-642-41338-4_25
