A Safety Framework for Critical Systems Utilising Deep Neural Networks

Abstract

Increasingly sophisticated mathematical modelling processes from machine learning are being used to analyse complex data. However, the performance and explainability of these models within practical critical systems require rigorous and continuous verification of their safe utilisation. Working towards addressing this challenge, this paper presents a novel, principled safety argument framework for critical systems that utilise deep neural networks. The approach allows various forms of prediction, e.g., the future reliability of passing some demands, or the confidence in achieving a required reliability level. It is supported by a Bayesian analysis using operational data and recent verification and validation techniques for deep learning. The prediction is conservative: it starts with partial prior knowledge obtained from lifecycle activities and then determines the worst-case prediction. Open challenges are also identified.
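The conservative, worst-case flavour of the Bayesian prediction can be illustrated with a small numerical sketch. The Python code below is a hypothetical toy example, not the authors' implementation: it assumes partial prior knowledge of the form P(pfd <= p_l) >= theta (confidence theta, from lifecycle activities, that the probability of failure per demand is below p_l), assumes failure-free operational data, and grid-searches over a two-point prior family (a family often used in conservative Bayesian inference to attain worst cases) for the largest posterior expected pfd. All names, parameters and numbers are illustrative assumptions.

import numpy as np

def worst_case_pfd(n_demands: int, p_l: float, theta: float, grid: int = 2000) -> float:
    """Illustrative worst-case posterior expected probability of failure per demand.

    Partial prior knowledge (assumed): P(pfd <= p_l) >= theta.
    Operational data (assumed): n_demands demands observed, all passed.
    The search is restricted to two-point priors: mass theta at x <= p_l and
    mass 1 - theta at y > p_l.
    """
    xs = np.linspace(0.0, p_l, grid)   # candidate pfd values below the claimed bound
    ys = np.linspace(p_l, 1.0, grid)   # candidate pfd values above the claimed bound
    worst = 0.0
    for y in ys:
        # Likelihood of n failure-free demands for each candidate pfd.
        lx = (1.0 - xs) ** n_demands
        ly = (1.0 - y) ** n_demands
        # Posterior expected pfd for each two-point prior (theta at x, 1 - theta at y).
        num = theta * xs * lx + (1.0 - theta) * y * ly
        den = theta * lx + (1.0 - theta) * ly
        worst = max(worst, float(np.max(num / den)))
    return worst

if __name__ == "__main__":
    # Hypothetical figures: 10,000 failure-free demands and 90% prior confidence
    # that the pfd is below 1e-4.
    print(worst_case_pfd(n_demands=10_000, p_l=1e-4, theta=0.9))

The returned value is a conservative estimate in the sense of the abstract: it is the largest posterior prediction consistent with the stated partial prior knowledge and the observed operational data, within the assumed prior family.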

Citation (APA)

Zhao, X., Banks, A., Sharp, J., Robu, V., Flynn, D., Fisher, M., & Huang, X. (2020). A Safety Framework for Critical Systems Utilising Deep Neural Networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12234 LNCS, pp. 244–259). Springer. https://doi.org/10.1007/978-3-030-54549-9_16
