Learning the boundary of inductive invariants


Abstract

We study the complexity of invariant inference and its connections to exact concept learning. We define a condition on invariants and their geometry, called the fence condition, which permits applying theoretical results from exact concept learning to answer open problems in invariant inference theory. The condition requires the invariant's boundary (the states whose Hamming distance from the invariant is one) to be backwards reachable from the bad states in a small number of steps. Using this condition, we obtain the first polynomial complexity result for an interpolation-based invariant inference algorithm, efficiently inferring monotone DNF invariants with access to a SAT solver as an oracle. We further harness Bshouty's seminal result in concept learning to efficiently infer invariants of a larger syntactic class of invariants beyond monotone DNF. Lastly, we consider the robustness of inference under program transformations. We show that some simple transformations preserve the fence condition, and that it is sensitive to more complex transformations.
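To make the geometric notion concrete, here is a minimal illustrative sketch (not the paper's algorithm) of the boundary of an invariant over Boolean program states: the states outside the invariant whose Hamming distance from the invariant is exactly one. States are modeled as tuples of booleans; the names `hamming_neighbors` and `boundary` are hypothetical helpers introduced for this example.

```python
from itertools import product

def hamming_neighbors(state):
    # All states at Hamming distance exactly one from `state`,
    # obtained by flipping a single Boolean variable.
    return [state[:i] + (not state[i],) + state[i+1:]
            for i in range(len(state))]

def boundary(invariant):
    # States outside the invariant whose Hamming distance from the
    # invariant (i.e., from its closest member) is exactly one.
    near_invariant = {n for s in invariant for n in hamming_neighbors(s)}
    return near_invariant - invariant

# Example: an invariant over 3 Boolean variables stating that
# at most one variable is true.
inv = {s for s in product([False, True], repeat=3) if sum(s) <= 1}
# boundary(inv) is the set of states with exactly two true variables.
```

Under the fence condition, every state in `boundary(inv)` must reach a bad state within a small number of transitions, which is what lets exact-learning results transfer to inference.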

Citation (APA)

Feldman, Y. M. Y., Sagiv, M., Shoham, S., & Wilcox, J. R. (2021). Learning the boundary of inductive invariants. Proceedings of the ACM on Programming Languages, 5(POPL). https://doi.org/10.1145/3434296
