Does software have to be ultra reliable in safety critical systems?

Abstract

It is difficult to demonstrate that safety-critical software is completely free of dangerous faults. Prior testing can be used to demonstrate that the unsafe failure rate is below some bound, but in practice the bound is not low enough to demonstrate the level of safety performance required for critical software-based systems such as avionics. This paper argues that higher levels of safety performance can be claimed by taking account of: 1) external mitigation to prevent an accident; 2) the fact that software is corrected once failures are detected in operation. A model based on these concepts is developed to derive an upper bound on the expected number of failures and accidents under different assumptions about fault fixing, diagnosis, repair, and accident mitigation. A numerical example illustrates the approach, and the implications and potential applications of the theory are discussed. © 2013 Springer-Verlag.
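The abstract does not reproduce the paper's model, but the style of bound it describes can be illustrated with a minimal sketch under assumed parameters: N independent faults, each failing as a Poisson process with an unknown rate; a probability d (here `fix_prob`) that any given failure is diagnosed and the fault repaired; and a probability m (here `mitigation_prob`) that external mitigation averts an accident when a failure occurs. Under these assumptions the expected number of failures per fault before it is fixed is geometric with mean 1/d, so the expected number of accidents is at most N(1 - m)/d regardless of the individual failure rates. The Monte Carlo check below is illustrative only; all names and parameter values are assumptions, not the paper's actual model.

```python
import random

def simulate_accidents(rates, fix_prob, mitigation_prob, horizon, trials=10_000):
    """Monte Carlo estimate of expected accidents over a finite horizon.

    Each fault fails as a Poisson process with its own rate. On each
    failure, external mitigation averts an accident with probability
    mitigation_prob, and the fault is diagnosed and fixed (removed)
    with probability fix_prob.
    """
    total_accidents = 0
    for _ in range(trials):
        for rate in rates:
            t = 0.0
            while True:
                t += random.expovariate(rate)  # time of this fault's next failure
                if t > horizon:
                    break                      # no further failures in the horizon
                if random.random() > mitigation_prob:
                    total_accidents += 1       # mitigation failed: an accident
                if random.random() < fix_prob:
                    break                      # fault diagnosed and removed
    return total_accidents / trials

# Illustrative parameters (assumed, not from the paper).
rates = [1e-3, 1e-4, 1e-5]                     # per-hour failure rates
fix_prob, mitigation_prob = 0.5, 0.9
bound = len(rates) * (1 - mitigation_prob) / fix_prob  # N(1 - m)/d
print("closed-form bound on expected accidents:", bound)
print("simulated estimate:",
      simulate_accidents(rates, fix_prob, mitigation_prob, horizon=1e5))
```

The interesting feature, which mirrors the paper's argument, is that the bound depends only on the fault count, the fix probability, and the mitigation probability, not on how unreliable each fault makes the software, so a useful safety claim does not require demonstrating an ultra-low failure rate up front.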

Citation (APA)

Bishop, P. (2013). Does software have to be ultra reliable in safety critical systems? In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8153 LNCS, pp. 118–129). https://doi.org/10.1007/978-3-642-40793-2_11
