Abstract
Many real-world problems, including inference in Bayes Nets, can be reduced to #SAT, the problem of counting the number of models of a propositional theory. This has motivated the need for efficient #SAT solvers. Currently, such solvers utilize a modified version of DPLL that employs decomposition and caching, techniques that significantly increase the time it takes to process each node in the search space. In addition, the search space is significantly larger than when solving SAT since we must continue searching even after the first solution has been found. It has previously been demonstrated that the size of a DPLL search tree can be significantly reduced by doing more reasoning at each node. However, for SAT the reductions gained are often not worth the extra time required. In this paper we verify the hypothesis that for #SAT this balance changes. In particular, we show that additional reasoning can reduce the size of a #SAT solver's search space, that this reduction cannot always be achieved by the already utilized technique of clause learning, and that this additional reasoning can be cost effective.

Copyright ©2007, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
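To make the counting problem concrete, here is a minimal sketch of a DPLL-style #SAT counter in Python. It is an illustration only, not the authors' solver: it branches on a variable, simplifies the clause set under each literal, and credits 2^k models when k variables remain unconstrained. Real #SAT solvers such as the one described in the paper add component decomposition, caching, and clause learning on top of this basic recursion. The CNF encoding (lists of nonzero integers, negative for negated literals) is an assumed convention.

```python
def simplify(clauses, lit):
    """Return the clause set under the assignment making `lit` true,
    or None if some clause becomes empty (a conflict)."""
    result = []
    for clause in clauses:
        if lit in clause:
            continue  # clause satisfied; drop it
        if -lit in clause:
            reduced = [l for l in clause if l != -lit]
            if not reduced:
                return None  # empty clause: this branch has 0 models
            result.append(reduced)
        else:
            result.append(clause)
    return result


def count_models(clauses, variables):
    """Count satisfying assignments of a CNF over `variables`.

    clauses: list of clauses; each clause is a list of ints
             (positive = variable, negative = negated variable).
    variables: set of variable indices still unassigned.
    """
    if clauses is None:
        return 0  # conflict reached on this branch
    if not clauses:
        # All clauses satisfied: every remaining variable is free.
        return 2 ** len(variables)
    v = next(iter(variables))
    rest = variables - {v}
    # Branch on v = true and v = false; total models is the sum.
    return (count_models(simplify(clauses, v), rest)
            + count_models(simplify(clauses, -v), rest))
```

For example, the formula (x1 ∨ x2) over variables {1, 2} has three models, and the contradictory formula x1 ∧ ¬x1 has none. Note how, unlike SAT, the recursion cannot stop at the first model found, which is why per-node reductions in the search tree pay off more for #SAT.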
Citation
Davies, J., & Bacchus, F. (2007). Using more reasoning to improve #SAT solving. In Proceedings of the National Conference on Artificial Intelligence (Vol. 1, pp. 185–190).