Benchmarking DAML+OIL repositories

Abstract

We present a benchmark that facilitates the evaluation of DAML+OIL repositories in a standard and systematic way. This benchmark is intended to evaluate the performance of DAML+OIL repositories with respect to extensional queries over a large data set that commits to a single realistic ontology. It consists of the ontology, customizable synthetic data, a set of test queries, and several performance metrics. Main features of the benchmark include a plausible ontology for the university domain, a repeatable data set that can be scaled to an arbitrary size, and an approach for measuring the degree to which a repository returns complete query answers. We also show a benchmark experiment for the evaluation of DLDB, a DAML+OIL repository that extends a relational database management system with description logic inference capabilities. © Springer-Verlag Berlin Heidelberg 2003.
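The completeness measure mentioned in the abstract can be illustrated with a small sketch. The following Python snippet is an illustration only, not code from the paper or from DLDB: the function names and the simple set-overlap definitions are assumptions. It scores a repository's answer set for one test query against a reference set of entailed answers, giving a completeness figure (fraction of expected answers returned) and, as a companion notion, a soundness figure (fraction of returned answers that are correct).

    # Hypothetical sketch of a query-answer completeness metric.
    # Assumes we already have the full set of entailed answers for a query
    # (e.g. computed offline by a complete reasoner) and the set of answers
    # the repository under test actually returned.

    def answer_completeness(returned, expected):
        """Fraction of the expected (entailed) answers that were returned."""
        expected = set(expected)
        if not expected:
            return 1.0  # nothing to return; trivially complete
        return len(set(returned) & expected) / len(expected)

    def answer_soundness(returned, expected):
        """Fraction of the returned answers that are actually correct."""
        returned = set(returned)
        if not returned:
            return 1.0  # nothing returned; trivially sound
        return len(returned & set(expected)) / len(returned)

    # Example: a repository that returns only explicitly asserted students
    # and misses answers that require inference over the ontology.
    expected = {"GraduateStudent0", "GraduateStudent1", "UndergraduateStudent7"}
    returned = {"GraduateStudent0", "GraduateStudent1"}
    print(answer_completeness(returned, expected))  # ~0.67
    print(answer_soundness(returned, expected))     # 1.0

In a benchmark run of this kind, such per-query scores would be reported alongside load time and query response time, so that a fast but incomplete repository can be distinguished from a slower but complete one.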

Cite

Guo, Y., Heflin, J., & Pan, Z. (2003). Benchmarking DAML+OIL repositories. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2870, 613–627. https://doi.org/10.1007/978-3-540-39718-2_39
