Lazy model expansion: Interleaving grounding with search


Abstract

Finding satisfying assignments for the variables involved in a set of constraints can be cast as a (bounded) model generation problem: search for (bounded) models of a theory in some logic. The state-of-the-art approach for bounded model generation for rich knowledge representation languages, like Answer Set Programming (ASP) and FO(·), and a CSP modeling language such as Zinc, is ground-and-solve: reduce the theory to a ground or propositional one and apply a search algorithm to the resulting theory. An important bottleneck is the blow-up of the size of the theory caused by the grounding phase. Lazily grounding the theory during search is a way to overcome this bottleneck. We present a theoretical framework and an implementation in the context of the FO(·) knowledge representation language. Instead of grounding all parts of a theory, justifications are derived for some parts of it. Given a partial assignment for the grounded part of the theory and valid justifications for the formulas of the non-grounded part, the justifications provide a recipe to construct a complete assignment that satisfies the non-grounded part. When a justification for a particular formula becomes invalid during search, a new one is derived; if that fails, the formula is split into a part to be grounded and a part that can be justified. Experimental results illustrate the power and generality of this approach.
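The sketch below illustrates the justification-based delaying described in the abstract on a deliberately simplified setting: propositional clauses instead of FO(·) formulas, and plain backtracking instead of a CDCL-style solver. All names (`Delayed`, `lazy_solve`, `pick_justification`) and the clause encoding are invented for this example and are not the authors' IDP implementation; it is a minimal sketch, assuming a clause's justification is a single literal whose truth satisfies the clause.

```python
# Illustrative sketch of lazy model expansion over propositional clauses.
# A clause is a list of literals; a literal is +atom or -atom.
from typing import Dict, List, Optional

Clause = List[int]
Assignment = Dict[int, bool]

class Delayed:
    """A non-grounded clause together with its current justification:
    a literal whose truth would satisfy the clause on its own."""
    def __init__(self, clause: Clause):
        self.clause = clause
        self.justification: Optional[int] = clause[0]

def lit_value(lit: int, assign: Assignment) -> Optional[bool]:
    val = assign.get(abs(lit))
    return None if val is None else (val if lit > 0 else not val)

def pick_justification(clause: Clause, assign: Assignment) -> Optional[int]:
    """Find a literal that is not false under the partial assignment."""
    for lit in clause:
        if lit_value(lit, assign) is not False:
            return lit
    return None

def clause_satisfied(clause: Clause, assign: Assignment) -> bool:
    return any(lit_value(lit, assign) is True for lit in clause)

def clause_conflicting(clause: Clause, assign: Assignment) -> bool:
    return all(lit_value(lit, assign) is False for lit in clause)

def lazy_solve(grounded: List[Clause], delayed: List[Delayed],
               atoms: List[int], assign: Assignment) -> Optional[Assignment]:
    # Repair justifications invalidated by the current partial assignment;
    # if no new justification exists, ground the clause (the "split" step).
    still_delayed, new_ground = [], []
    for d in delayed:
        if lit_value(d.justification, assign) is False:
            d.justification = pick_justification(d.clause, assign)
        (new_ground if d.justification is None else still_delayed).append(
            d.clause if d.justification is None else d)
    if new_ground:
        grounded = grounded + new_ground
        atoms = atoms + [abs(l) for c in new_ground for l in c if abs(l) not in atoms]
    if any(clause_conflicting(c, assign) for c in grounded):
        return None                            # conflict in the grounded part
    free = [a for a in atoms if a not in assign]
    if not free:
        if all(clause_satisfied(c, assign) for c in grounded):
            # Extend the assignment along the justifications of delayed clauses.
            model = dict(assign)
            for d in still_delayed:
                model.setdefault(abs(d.justification), d.justification > 0)
            return model
        return None
    atom = free[0]
    for value in (True, False):                # plain backtracking search
        result = lazy_solve(grounded, still_delayed, atoms, {**assign, atom: value})
        if result is not None:
            return result
    return None

if __name__ == "__main__":
    # Grounded part: (a or b) and (not a or c); delayed part: (not c or d or e).
    grounded = [[1, 2], [-1, 3]]
    delayed = [Delayed([-3, 4, 5])]
    print(lazy_solve(grounded, delayed, atoms=[1, 2, 3], assign={}))
```

The delayed clause only enters the search when every candidate justification has been falsified, which is the propositional analogue of splitting a formula into a part to be grounded and a part that can still be justified.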

Citation (APA)

De Cat, B., Denecker, M., Stuckey, P., & Bruynooghe, M. (2015). Lazy model expansion: Interleaving grounding with search. Journal of Artificial Intelligence Research, 52, 235–286. https://doi.org/10.1613/jair.4591
