Learning Branching Heuristics for Propositional Model Counting

Abstract

Propositional model counting, or #SAT, is the problem of computing the number of satisfying assignments of a Boolean formula. Many problems from different application areas, including many discrete probabilistic inference problems, can be translated into model counting problems to be solved by #SAT solvers. Exact #SAT solvers, however, are often not scalable to industrial-size instances. In this paper, we present Neuro#, an approach for learning branching heuristics to improve the performance of exact #SAT solvers on instances from a given family of problems. We experimentally show that our method reduces the step count on similarly distributed held-out instances and generalizes to much larger instances from the same problem family. It achieves these results on a number of problem families with very different structures. In addition to step count improvements, Neuro# can also achieve orders-of-magnitude wall-clock speedups over the vanilla solver on larger instances in some problem families, despite the runtime overhead of querying the model.
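
To make concrete what #SAT asks for and where a branching heuristic enters, here is a minimal, hypothetical sketch of a DPLL-style exact model counter in Python. This is not the authors' implementation (Neuro# plugs a learned heuristic into an existing industrial solver); the clause encoding and helper names (`simplify`, `count_models`, `first_unassigned`) are illustrative assumptions. The point is that the variable-selection function is an interchangeable component, which is exactly the slot a learned heuristic would fill.

```python
# Minimal sketch (illustrative only): an exact #SAT counter in the DPLL style,
# with the branching heuristic exposed as a pluggable function.
from typing import Callable, List, Optional

Clause = List[int]   # DIMACS-style literals, e.g. [1, -3] means (x1 OR NOT x3)
Formula = List[Clause]

def simplify(formula: Formula, lit: int) -> Optional[Formula]:
    """Assign `lit` True: drop satisfied clauses, remove the falsified literal.
    Returns None if an empty clause (conflict) is produced."""
    out: Formula = []
    for clause in formula:
        if lit in clause:
            continue                          # clause satisfied
        reduced = [l for l in clause if l != -lit]
        if not reduced:
            return None                       # conflict
        out.append(reduced)
    return out

def first_unassigned(formula: Formula, num_vars: int, assigned: set) -> int:
    """Baseline heuristic: branch on the first variable still appearing.
    A learned heuristic would replace this function."""
    for clause in formula:
        for l in clause:
            if abs(l) not in assigned:
                return abs(l)
    return 0

def count_models(formula: Optional[Formula], num_vars: int,
                 choose_var: Callable[[Formula, int, set], int] = first_unassigned,
                 assigned: Optional[set] = None) -> int:
    """Count satisfying assignments over all `num_vars` variables."""
    assigned = set() if assigned is None else assigned
    if formula is None:
        return 0                              # conflict: no models on this branch
    if not formula:
        # All clauses satisfied: remaining free variables can take any value.
        return 2 ** (num_vars - len(assigned))
    var = choose_var(formula, num_vars, assigned)
    total = 0
    for lit in (var, -var):                   # branch on var = True, then False
        total += count_models(simplify(formula, lit), num_vars,
                              choose_var, assigned | {var})
    return total

# Example: (x1 OR x2) AND (NOT x1 OR x3) over 3 variables has 4 models.
print(count_models([[1, 2], [-1, 3]], 3))
```

The choice of `choose_var` does not affect the count returned, only the size of the search tree explored; this is why the paper measures improvement in solver step count.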

Citation (APA)
Vaezipoor, P., Lederman, G., Wu, Y., Maddison, C., Grosse, R. B., Seshia, S. A., & Bacchus, F. (2021). Learning Branching Heuristics for Propositional Model Counting. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 14A, pp. 12427–12435). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i14.17474
