Proof-carrying plans

Abstract

It is becoming increasingly important to verify the safety and security of AI applications. While declarative languages (of the kind found in automated planners and model checkers) are traditionally used for verifying AI systems, a major challenge is to design methods that generate verified executable programs. A good example of such a “verification to implementation” cycle is given by automated planning languages such as PDDL, where plans are found via model search in a declarative language, but are then interpreted or compiled into executable code in an imperative language. In this paper, we show that this method can itself be verified. We present a formal framework and a prototype Agda implementation that represent PDDL plans as executable functions inhabiting types given by the formulae describing planning problems. By exploiting the well-known Curry-Howard correspondence, type-checking then automatically ensures that the generated program corresponds precisely to the specification of the planning problem.
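The propositions-as-types idea the abstract describes can be illustrated in miniature. The following is a hedged sketch in Haskell (the paper's actual development is in Agda, and the domain, names, and actions here are purely hypothetical): world states are tracked at the type level, each action consumes evidence of its precondition and produces evidence of its effect, and a "plan" is a composition of actions that the type checker accepts only if the actions are sequenced in a valid order.

```haskell
{-# LANGUAGE GADTs, DataKinds, KindSignatures #-}
module Main where

-- A toy planning domain: a door that is either closed or open.
data DoorState = Closed | Open

-- Propositions-as-types: a value of type 'Door s' is evidence
-- that the door is in state 's'.
data Door (s :: DoorState) where
  IsClosed :: Door 'Closed
  IsOpen   :: Door 'Open

-- Actions: each maps a proof of its precondition to a proof of its effect.
openDoor :: Door 'Closed -> Door 'Open
openDoor IsClosed = IsOpen

closeDoor :: Door 'Open -> Door 'Closed
closeDoor IsOpen = IsClosed

-- A plan is a composition of actions. Swapping the two actions
-- (or dropping one) would be rejected at compile time.
plan :: Door 'Closed -> Door 'Closed
plan = closeDoor . openDoor

main :: IO ()
main = case plan IsClosed of
  IsClosed -> putStrLn "plan type-checks and executes"
```

Because the plan inhabits the type `Door 'Closed -> Door 'Closed`, type-checking plays the role that plan validation plays in PDDL tooling: an ill-ordered plan is simply not a well-typed program.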

Citation (APA)

Schwaab, C., Komendantskaya, E., Hill, A., Farka, F., Petrick, R. P. A., Wells, J., & Hammond, K. (2019). Proof-carrying plans. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11372 LNCS, pp. 204–220). Springer Verlag. https://doi.org/10.1007/978-3-030-05998-9_13
