Learning invariants using decision trees and implication counterexamples

  • Garg P
  • Neider D
  • Madhusudan P
  • Roth D

Abstract

Inductive invariants can be robustly synthesized using a learning model in which the teacher is a program verifier that instructs the learner through concrete program configurations, classified as positive, negative, and implication counterexamples. We propose the first learning algorithms in this model with implication counterexamples that are based on machine learning techniques. In particular, we extend classical decision-tree learning algorithms to handle implication samples, building new scalable ways to construct small decision trees using statistical measures. We also develop a decision-tree learning algorithm in this model that is guaranteed to converge to the right concept (invariant) if one exists. We implement the learners and an appropriate teacher, and show that the resulting invariant synthesis is efficient and convergent on a large suite of programs.
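To make the learning model concrete, the sketch below illustrates one way implication counterexamples constrain a learner. It is not the paper's algorithm (which adapts the tree's statistical split measures to account for unlabeled implication endpoints directly); instead it closes the labeled sets under the implication constraints, heuristically labels any leftover endpoints, and hands the result to an off-the-shelf decision tree from scikit-learn. The function names (propagate_implications, learn_candidate) and the toy sample are hypothetical illustrations.

```python
from sklearn.tree import DecisionTreeClassifier

def propagate_implications(pos, neg, implications):
    """Close pos/neg under the ICE constraints: if p is positive and
    (p, q) is an implication, q must be positive; if q is negative,
    p must be negative."""
    pos, neg = set(pos), set(neg)
    changed = True
    while changed:
        changed = False
        for p, q in implications:
            if p in pos and q not in pos:
                pos.add(q)
                changed = True
            if q in neg and p not in neg:
                neg.add(p)
                changed = True
    if pos & neg:
        raise ValueError("inconsistent sample: no invariant fits it")
    return pos, neg

def learn_candidate(pos, neg, implications):
    pos, neg = propagate_implications(pos, neg, implications)
    # Implication endpoints still unlabeled after propagation are treated
    # as positive here for simplicity; the paper instead leaves such points
    # unconstrained inside the tree's split measures.
    for p, q in implications:
        for point in (p, q):
            if point not in pos and point not in neg:
                pos.add(point)
    X = [list(point) for point in pos] + [list(point) for point in neg]
    y = [1] * len(pos) + [0] * len(neg)
    return DecisionTreeClassifier().fit(X, y)

# Points are program states (x, y); an implication (p, q) says: if the
# invariant holds at p, it must also hold at q (one loop step from p
# reaches q).
tree = learn_candidate(
    pos={(0, 0)},
    neg={(5, 1)},
    implications=[((0, 0), (1, 0)), ((1, 0), (2, 0))],
)
print(tree.predict([[3, 0], [5, 1]]))  # candidate's guess on new states
```

In the full loop described in the abstract, the teacher (a program verifier) would check the returned candidate against the program, answer with a fresh positive, negative, or implication counterexample if it fails, and the learner would repeat.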

Citation (APA)

Garg, P., Neider, D., Madhusudan, P., & Roth, D. (2016). Learning invariants using decision trees and implication counterexamples. ACM SIGPLAN Notices, 51(1), 499–512. https://doi.org/10.1145/2914770.2837664
