Scalable training of Markov logic networks using approximate counting


Abstract

In this paper, we propose principled weight learning algorithms for Markov logic networks that can easily scale to much larger datasets and application domains than existing algorithms. The main idea in our approach is to use approximate counting techniques to substantially reduce the complexity of the most computation-intensive sub-step in weight learning: computing the number of groundings of a first-order formula that evaluate to true given a truth assignment to all the random variables. We derive theoretical bounds on the performance of our new algorithms and demonstrate experimentally that they are orders of magnitude faster and achieve the same or better accuracy than existing approaches.
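The quantity the abstract singles out, the number of true groundings of a formula under a given truth assignment, is what dominates the cost of each weight-learning step. As a rough illustration of why approximating this count helps, the Python sketch below contrasts exact enumeration of all groundings with a simple uniform-sampling estimator. This is only a hedged sketch: the Formula object with its variables attribute and is_satisfied method, and the domains mapping, are hypothetical placeholders, and the paper's own approximate counting scheme is more sophisticated than plain uniform sampling.

import itertools
import random

def count_true_groundings_exact(formula, domains, world):
    # Exact count: enumerate every grounding of the formula's logical
    # variables and check whether it is satisfied in the given world.
    # Cost grows exponentially with the number of logical variables.
    # (formula.variables, formula.is_satisfied, and domains are
    # assumed interfaces, not from the paper.)
    total = 0
    for binding in itertools.product(*(domains[v] for v in formula.variables)):
        if formula.is_satisfied(dict(zip(formula.variables, binding)), world):
            total += 1
    return total

def count_true_groundings_sampled(formula, domains, world, num_samples=10000, rng=random):
    # Approximate count: sample groundings uniformly at random and
    # scale the observed fraction of satisfied samples by the total
    # number of groundings. Runtime depends on num_samples, not on
    # the (possibly enormous) number of groundings.
    total_groundings = 1
    for v in formula.variables:
        total_groundings *= len(domains[v])
    satisfied = 0
    for _ in range(num_samples):
        binding = {v: rng.choice(domains[v]) for v in formula.variables}
        if formula.is_satisfied(binding, world):
            satisfied += 1
    return total_groundings * satisfied / num_samples

Plugging an estimator like the sampled count into the weight-learning gradient trades a small, controllable amount of counting error for a large reduction in per-iteration cost, which is the trade-off the paper's theoretical bounds characterize.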

Citation (APA)

Sarkhel, S., Venugopal, D., Pham, T. A., Singla, P., & Gogate, V. (2016). Scalable training of Markov logic networks using approximate counting. In 30th AAAI Conference on Artificial Intelligence, AAAI 2016 (pp. 1067–1073). AAAI Press. https://doi.org/10.1609/aaai.v30i1.10119
