Logic, probability and learning, or an introduction to statistical relational learning

Abstract

Probabilistic inductive logic programming (PILP), sometimes also called statistical relational learning, addresses one of the central questions of artificial intelligence: the integration of probabilistic reasoning with first-order logic representations and machine learning. A rich variety of formalisms and learning techniques has been developed, and they are being applied in areas such as network analysis, robotics, bioinformatics, and intelligent agents. This tutorial starts with an introduction to probabilistic representations and machine learning, and then continues with an overview of the state of the art in statistical relational learning. We start from classical settings for logic learning (or inductive logic programming), namely learning from entailment, learning from interpretations, and learning from proofs, and show how they can be extended with probabilistic methods. While doing so, we review state-of-the-art statistical relational learning approaches and show how they fit the discussed learning settings for probabilistic inductive logic programming. © 2008 Springer Berlin Heidelberg.
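To make the integration of probability and first-order logic more concrete, the following is a minimal Python sketch of the distribution semantics that underlies many PILP formalisms (e.g., PRISM, ProbLog): independent probabilistic facts induce a distribution over possible worlds, and the probability of a query is the total probability of the worlds whose logic program entails it. The tiny "smokers" program, the predicate names, and the probabilities are illustrative assumptions, not material taken from the paper.

from itertools import product

# Probabilistic facts: each is independently true with the given probability.
prob_facts = {
    ("stress", "ann"): 0.3,
    ("influences", "ann", "bob"): 0.2,
}

def smokes(person, world):
    """Deterministic rules: smokes(X) :- stress(X).
                            smokes(X) :- influences(Y, X), smokes(Y)."""
    if ("stress", person) in world:
        return True
    for fact in world:
        if fact[0] == "influences" and fact[2] == person and smokes(fact[1], world):
            return True
    return False

def query_probability(query_person):
    """Sum the probabilities of all possible worlds that entail smokes(query_person)."""
    facts = list(prob_facts)
    total = 0.0
    for choices in product([True, False], repeat=len(facts)):
        world = {f for f, chosen in zip(facts, choices) if chosen}
        weight = 1.0
        for f, chosen in zip(facts, choices):
            weight *= prob_facts[f] if chosen else 1.0 - prob_facts[f]
        if smokes(query_person, world):
            total += weight
    return total

print(query_probability("bob"))  # 0.3 * 0.2 = 0.06

Enumerating possible worlds as above is exponential in the number of probabilistic facts; practical PILP systems instead rely on knowledge compilation or approximate inference, but the semantics being computed is the same.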

Citation (APA)

De Raedt, L. (2008). Logic, probability and learning, or an introduction to statistical relational learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5249 LNAI, p. 5). Springer Verlag. https://doi.org/10.1007/978-3-540-88190-2_4
