Towards trustworthy AI for autonomous systems

Abstract

Trust remains a major challenge in the development, implementation and deployment of artificial intelligence and autonomous systems in the defence and law enforcement industries. To address this issue, we follow the verification-as-planning paradigm, which uses model checking techniques to solve planning and goal reasoning problems for autonomous systems. Specifically, we present a novel framework named Goal Reasoning And Verification for Independent Trusted Autonomous Systems (GRAVITAS) and discuss how it helps provide trustworthy plans in uncertain and dynamic environments.
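The verification-as-planning paradigm mentioned in the abstract treats planning and goal reasoning as a model checking problem: the system's behaviour is captured as a formal transition system, and a plan corresponds to a path through that model to a state satisfying the goal. The Python sketch below illustrates only this general idea as a reachability search over a toy transition system; it is not the GRAVITAS implementation, and the states, actions and goal shown are hypothetical.

    # Illustrative sketch only: "planning as reachability" over a toy transition
    # system, in the spirit of the verification-as-planning idea described in the
    # abstract. This is NOT the GRAVITAS framework; states, actions and the goal
    # below are hypothetical.
    from collections import deque

    def find_plan(initial_state, goal, transitions):
        """Breadth-first search for a sequence of actions leading from
        initial_state to a state satisfying goal(state); returns None
        if no such plan exists."""
        frontier = deque([(initial_state, [])])
        visited = {initial_state}
        while frontier:
            state, plan = frontier.popleft()
            if goal(state):
                return plan
            for action, next_state in transitions(state):
                if next_state not in visited:
                    visited.add(next_state)
                    frontier.append((next_state, plan + [action]))
        return None

    # Hypothetical example: a vehicle on a 1-D track must reach position 3.
    def moves(pos):
        return [("forward", pos + 1), ("backward", pos - 1)] if -5 <= pos <= 5 else []

    print(find_plan(0, lambda pos: pos == 3, moves))  # ['forward', 'forward', 'forward']

In a trustworthy-autonomy setting, the payoff of this view is that the same formal model used to generate the plan can also be checked against safety properties, so the returned plan comes with verification guarantees rather than being a black-box output.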

Citation (APA)
Bride, H., Dong, J. S., Hóu, Z., Mahony, B., & Oxenham, M. (2018). Towards trustworthy AI for autonomous systems. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11232 LNCS, pp. 407–411). Springer Verlag. https://doi.org/10.1007/978-3-030-02450-5_24
