Efficient weight learning for Markov logic networks

142 citations · 101 Mendeley readers

This article is free to access.

Abstract

Markov logic networks (MLNs) combine Markov networks and first-order logic, and are a powerful and increasingly popular representation for statistical relational learning. The state-of-the-art method for discriminative learning of MLN weights is the voted perceptron algorithm, which is essentially gradient descent with an MPE (most probable explanation) approximation to the expected sufficient statistics (true clause counts). Unfortunately, these counts can vary widely between clauses, causing the learning problem to be highly ill-conditioned and making gradient descent very slow. In this paper, we explore several alternatives, from per-weight learning rates to second-order methods. In particular, we focus on two approaches that avoid computing the partition function: diagonal Newton and scaled conjugate gradient. In experiments on standard SRL datasets, we obtain order-of-magnitude speedups, or more accurate models given comparable learning times.
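To make the contrast concrete, below is a minimal NumPy sketch of the update rules the abstract names: the voted-perceptron gradient step with MPE counts, a per-weight learning rate, and the diagonal Newton step. All function names, the damping term, and the sample-based variance estimate are illustrative assumptions for this sketch, not the authors' actual implementation.

```python
# Minimal sketch (an assumption, not the authors' code) of the weight
# updates contrasted in the abstract, for an MLN with a few clauses.
import numpy as np

def gradient(true_counts, expected_counts):
    # Conditional log-likelihood gradient w.r.t. each clause weight:
    # observed clause counts in the data minus expected counts under
    # the current model.
    return true_counts - expected_counts

def voted_perceptron_step(w, true_counts, mpe_counts, lr=1e-4):
    # Voted perceptron: gradient ascent with the expectation
    # approximated by clause counts in the MPE state.
    return w + lr * gradient(true_counts, mpe_counts)

def per_weight_step(w, true_counts, expected_counts, lr=1.0):
    # Per-weight learning rates: divide a global rate by each clause's
    # true count, so clauses with huge counts take proportionally
    # smaller steps (a simple fix for ill-conditioning).
    scale = lr / np.maximum(true_counts, 1.0)  # guard against zero counts
    return w + scale * gradient(true_counts, expected_counts)

def diagonal_newton_step(w, true_counts, count_samples, damping=1.0):
    # Diagonal Newton: rescale each gradient component by the inverse of
    # the corresponding diagonal Hessian entry. For the conditional
    # log-likelihood, that entry is the variance of the clause count,
    # estimated here from samples (e.g., drawn with a sampler such as
    # MC-SAT; the sampler itself is outside this sketch).
    expected = count_samples.mean(axis=0)
    variance = count_samples.var(axis=0)
    return w + gradient(true_counts, expected) / (variance + damping)

# Toy usage: clause counts that differ by orders of magnitude, the
# situation that makes plain gradient descent ill-conditioned.
rng = np.random.default_rng(0)
true_counts = np.array([8.0, 120.0, 2500.0])
count_samples = true_counts + rng.normal(scale=[1.0, 10.0, 50.0], size=(100, 3))
w = diagonal_newton_step(np.zeros(3), true_counts, count_samples)
```

The point of the per-weight and diagonal-Newton rescalings is that a clause with thousands of true groundings and one with a handful receive steps of comparable effective size, which is what lets these methods converge far faster than uniform-rate gradient descent on ill-conditioned problems.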

Citation (APA)

Lowd, D., & Domingos, P. (2007). Efficient weight learning for Markov logic networks. In Lecture Notes in Computer Science (Vol. 4702, pp. 200–211). Springer. https://doi.org/10.1007/978-3-540-74976-9_21
