Optimizing enterprise-scale OWL 2 RL reasoning in a relational database system

Abstract

OWL 2 RL was standardized as a less expressive but scalable subset of OWL 2 that allows a forward-chaining implementation. However, building an enterprise-scale forward-chaining-based inference engine that can (1) take advantage of modern multi-core computer architectures, and (2) efficiently update inferences after additions remains a challenge. In this paper, we present an OWL 2 RL inference engine implemented inside the Oracle database system, using novel techniques for parallel processing that can readily scale on multi-core machines and clusters. Additionally, we have added support for efficient incremental maintenance of the inferred graph after triple additions. Finally, to handle the increasing number of owl:sameAs relationships present in Semantic Web datasets, we have provided a hybrid in-memory/disk-based approach to efficiently compute compact equivalence closures. We have done extensive testing to evaluate these new techniques; the test results demonstrate that our inference engine is capable of performing efficient inference over ontologies with billions of triples using a modest hardware configuration. © 2010 Springer-Verlag.
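The "compact equivalence closure" idea mentioned in the abstract can be illustrated with a small union-find sketch in Python. This is not the paper's Oracle-internal, hybrid in-memory/disk implementation; the function and resource names (compact_same_as, the ex: prefixes) are hypothetical. The sketch only shows the general technique of rewriting triples onto one canonical representative per owl:sameAs equivalence class instead of materializing the full pairwise sameAs closure.

    # Illustrative sketch only, assuming simple in-memory (s, p, o) triples.
    SAME_AS = "owl:sameAs"

    class UnionFind:
        """Union-find with path compression; one canonical node per class."""
        def __init__(self):
            self.parent = {}

        def find(self, x):
            self.parent.setdefault(x, x)
            root = x
            while self.parent[root] != root:
                root = self.parent[root]
            while self.parent[x] != root:      # path compression
                self.parent[x], x = root, self.parent[x]
            return root

        def union(self, a, b):
            ra, rb = self.find(a), self.find(b)
            if ra != rb:
                self.parent[rb] = ra

    def compact_same_as(triples):
        """Return (rewritten triples, canonical map) for a set of (s, p, o) triples."""
        uf = UnionFind()
        for s, p, o in triples:
            if p == SAME_AS:
                uf.union(s, o)
        rewritten = {
            (uf.find(s), p, uf.find(o))
            for s, p, o in triples
            if p != SAME_AS                    # sameAs facts kept implicitly via the map
        }
        return rewritten, {x: uf.find(x) for x in uf.parent}

    if __name__ == "__main__":
        data = {
            ("ex:alice", SAME_AS, "ex:a1"),
            ("ex:a1",    SAME_AS, "ex:a2"),
            ("ex:alice", "ex:worksFor", "ex:acme"),
            ("ex:a2",    "ex:knows",    "ex:bob"),
        }
        compacted, canon_map = compact_same_as(data)
        print(compacted)   # all facts now attached to one representative per class
        print(canon_map)   # ex:alice, ex:a1 and ex:a2 share the same canonical node

In the paper's setting the same idea has to work at database scale over billions of triples; here plain Python dictionaries stand in for the hybrid in-memory/disk structures described in the abstract.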

Cite

APA

Kolovski, V., Wu, Z., & Eadon, G. (2010). Optimizing enterprise-scale OWL 2 RL reasoning in a relational database system. In Lecture Notes in Computer Science (Vol. 6496, pp. 436–452). Springer-Verlag. https://doi.org/10.1007/978-3-642-17746-0_28
