Model checking learning agent systems using Promela with embedded C code and abstraction

Abstract

As autonomous systems become more prevalent, methods for their verification will become more widely used. Model checking is a formal verification technique that can help ensure the safety of autonomous systems, but in most cases it cannot be applied by novices, or in its straight “off-the-shelf” form. In order to be more widely applicable it is crucial that more sophisticated techniques are used, and are presented in a way that is reproducible by engineers and verifiers alike. In this paper we demonstrate in detail two techniques that are used to increase the power of model checking using the model checker Spin. The first of these is the use of embedded C code within Promela specifications, in order to accurately reflect robot movement. The second is to use abstraction together with a simulation relation to allow us to verify multiple environments simultaneously. We apply these techniques to a fairly simple system in which a robot moves about a fixed circular environment and learns to avoid obstacles. The learning algorithm is inspired by the way that insects learn to avoid obstacles in response to pain signals received from their antennae. Crucially, we prove that our abstraction is sound for our example system—a step that is often omitted but is vital if formal verification is to be widely accepted as a useful and meaningful approach.
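The two techniques can be sketched in Promela itself. The first, embedded C code, lets a Spin model manipulate continuous quantities such as the robot's position directly in C. The fragment below is only a minimal illustration under assumed names and values (px, py, the step sizes and the arena radius are invented for the example), not the authors' model; it uses Spin's c_state, c_code and c_expr primitives, which are available in Spin version 4 and later.

    /* Illustrative sketch of embedded C in Promela; names and numbers are invented. */
    c_state "double px" "Global" "0.0"    /* robot x coordinate, kept in the state vector */
    c_state "double py" "Global" "0.0"    /* robot y coordinate */

    active proctype Robot()
    {
        do
        :: c_code {
               /* one motion step; a real model would use the robot's
                  current heading and speed here */
               now.px = now.px + 0.5;
               now.py = now.py + 0.25;
           };
           if
           :: c_expr { now.px*now.px + now.py*now.py < 100.0 } -> skip
           :: c_expr { now.px*now.px + now.py*now.py >= 100.0 } ->
                  assert(false)   /* robot left the circular arena (radius 10, illustrative) */
           fi
        od
    }

The second technique, abstraction, replaces a fixed concrete environment with a nondeterministic one that over-approximates every concrete obstacle layout; a property verified of the abstract model then carries over to each concrete environment, provided the abstraction is proved sound via a simulation relation, as the paper does. The process below is again only an illustrative sketch: a learning-agent process would receive its sensor readings from the hypothetical channel sensor.

    /* Illustrative abstract environment: at every step it may report either
       an obstacle or free space, covering all concrete environments at once. */
    mtype = { obstacle, clear };
    chan sensor = [0] of { mtype };

    active proctype AbstractEnv()
    {
        do
        :: sensor ! obstacle   /* some concrete environment presents an obstacle here */
        :: sensor ! clear      /* ... or free space */
        od
    }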

Citation (APA)

Kirwan, R., Miller, A., & Porr, B. (2016). Model checking learning agent systems using Promela with embedded C code and abstraction. Formal Aspects of Computing, 28(6), 1027–1056. https://doi.org/10.1007/s00165-016-0382-2
