Formal assurance for cooperative intelligent autonomous agents

Abstract

Developing trust in intelligent agents requires understanding the full capabilities of the agent, including the boundaries beyond which the agent is not designed to operate. This paper focuses on applying formal verification methods to identify these boundary conditions and thereby ensure a design that supports effective operation of the human-agent team. The approach involves creating an executable specification of the human-machine interaction in a cognitive architecture that incorporates the expression of learning behavior. The model is then translated into a formal language, where verification and validation activities can occur in an automated fashion. We illustrate our approach through the design of an intelligent copilot that teams with a human pilot during a takeoff operation, including a contingency scenario in which an engine-out occurs. Formal verification and counterexample generation enable increased confidence in the designed procedures and in the behavior of the intelligent copilot system.
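The abstract does not name the specific cognitive architecture or formal tooling used, so the sketch below is only a minimal, hypothetical illustration of the verification idea it describes: encode the human-copilot takeoff procedure as a small finite-state model, exhaustively explore the reachable states, and report a counterexample trace when a safety property is violated. All state names, transitions, and the safety property ("never rotate with a failed engine") are illustrative assumptions, not the authors' model.

```python
# Hypothetical sketch: exhaustive exploration of a toy human-copilot
# takeoff model with counterexample generation. Not the paper's method.
from collections import deque

# Each state is a tuple: (phase, engine_ok, copilot_action)
INITIAL = ("lined_up", True, "monitor")

def successors(state):
    """Enumerate possible next states, including a nondeterministic engine failure."""
    phase, engine_ok, action = state
    next_states = []
    if phase == "lined_up":
        next_states.append(("takeoff_roll", engine_ok, "monitor"))
    elif phase == "takeoff_roll":
        # Contingency: the engine may fail at any point during the roll.
        next_states.append(("takeoff_roll", False, action))
        if engine_ok:
            next_states.append(("rotate", True, "monitor"))
        else:
            # The copilot may call the abort, or the crew may continue --
            # the unsafe branch is what exhaustive search should expose.
            next_states.append(("rejected_takeoff", False, "call_abort"))
            next_states.append(("rotate", False, "monitor"))
    return next_states

def is_unsafe(state):
    """Safety property: never rotate with a failed engine."""
    phase, engine_ok, _ = state
    return phase == "rotate" and not engine_ok

def find_counterexample(initial):
    """Breadth-first search over the state space; return an unsafe trace if one exists."""
    frontier = deque([[initial]])
    visited = {initial}
    while frontier:
        trace = frontier.popleft()
        if is_unsafe(trace[-1]):
            return trace
        for nxt in successors(trace[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(trace + [nxt])
    return None

if __name__ == "__main__":
    trace = find_counterexample(INITIAL)
    if trace:
        print("Counterexample trace:")
        for step in trace:
            print("  ", step)
    else:
        print("Safety property holds for all reachable states.")
```

In a full formal-methods workflow, the model would instead be generated from the executable cognitive-architecture specification and checked with a model checker, but the counterexample trace printed here conveys the same kind of evidence the abstract refers to.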

Citation (APA)

Bhattacharyya, S., Eskridge, T. C., Neogi, N. A., Carvalho, M., & Stafford, M. (2018). Formal assurance for cooperative intelligent autonomous agents. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10811 LNCS, pp. 20–36). Springer Verlag. https://doi.org/10.1007/978-3-319-77935-5_2
