Probabilistic inductive logic programming, also known as statistical relational learning, addresses one of the central questions of artificial intelligence: the integration of probabilistic reasoning with machine learning and first-order and relational logic representations. A rich variety of formalisms and learning techniques has been developed, but a unifying characterization of the underlying learning settings has so far been missing. In this chapter, we start from inductive logic programming and sketch how its formalisms, settings, and techniques can be extended to the statistical case. More precisely, we outline three classical settings for inductive logic programming, namely learning from entailment, learning from interpretations, and learning from proofs or traces, and show how they can be adapted to cover state-of-the-art statistical relational learning approaches. © 2008 Springer-Verlag Berlin Heidelberg.
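To make the first two learning settings named in the abstract concrete, the following is a minimal sketch, not taken from the chapter, that contrasts their coverage relations on ground (propositional) Horn clauses: under learning from entailment an example is a single fact covered when background plus hypothesis entail it, whereas under learning from interpretations an example is a set of facts covered when it is a model of the hypothesis. The chapter works with full first-order clauses and probabilities; the clause encoding, predicate names, and data below are illustrative assumptions only.

```python
def forward_chain(clauses):
    """Least model of ground definite clauses, each given as (head, [body atoms])."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, body in clauses:
            if head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

def covers_entailment(background, hypothesis, example_fact):
    """Learning from entailment: the example fact follows from B united with H."""
    return example_fact in forward_chain(background + hypothesis)

def covers_interpretation(hypothesis, interpretation):
    """Learning from interpretations: every clause of H is true in the example interpretation."""
    return all(head in interpretation or any(b not in interpretation for b in body)
               for head, body in hypothesis)

# Hypothetical toy data: background facts have empty bodies.
background = [("parent(ann,bob)", []), ("parent(bob,cid)", [])]
hypothesis = [("grandparent(ann,cid)", ["parent(ann,bob)", "parent(bob,cid)"])]

print(covers_entailment(background, hypothesis, "grandparent(ann,cid)"))   # True
print(covers_interpretation(hypothesis, {"parent(ann,bob)", "parent(bob,cid)",
                                         "grandparent(ann,cid)"}))         # True
print(covers_interpretation(hypothesis, {"parent(ann,bob)", "parent(bob,cid)"}))  # False
```

In the statistical extensions surveyed in the chapter, these crisp coverage tests are replaced by probabilities, e.g. the probability that a fact is entailed or that an interpretation is generated by the probabilistic logic program.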
CITATION STYLE
De Raedt, L., & Kersting, K. (2008). Probabilistic inductive logic programming. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 4911 LNAI, 1–27. https://doi.org/10.1007/978-3-540-78652-8_1