Dependability assurance of systems that embed machine learning (ML) components, so-called learning-enabled systems (LESs), is a key step towards their use in safety-critical applications. In emerging standardization and guidance efforts, there is a growing consensus on the value of using assurance cases for that purpose. This paper develops a quantitative notion of assurance that a learning-enabled system (LES) is dependable, as a core component of its assurance case, extending our prior work that applied to ML components. Specifically, we characterize LES assurance in the form of assurance measures: probabilistic quantifications of confidence that an LES possesses system-level properties associated with functional capabilities and dependability attributes. We illustrate the utility of assurance measures by applying them to a real-world autonomous aviation system, also describing their role both in i) guiding high-level, runtime risk mitigation decisions, and ii) serving as a core component of the associated dynamic assurance case.
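To make the notion of a probabilistic assurance measure concrete, the sketch below shows one simple way such a quantity could be computed: the posterior confidence, given pass/fail evidence, that the probability of a system-level property holding exceeds a required level. This is only an illustrative Beta-Bernoulli model under assumed names and numbers, not the authors' actual formulation, which the paper develops in far more detail.

```python
# Illustrative sketch (not the paper's method): an "assurance measure" as the
# posterior probability that the chance p of a dependability property holding
# exceeds a required level, under a Beta-Bernoulli model with a uniform prior.
# The function name, the evidence counts, and the required level are all
# hypothetical. Requires SciPy.
from scipy.stats import beta

def assurance_measure(successes: int, failures: int,
                      required_level: float = 0.999) -> float:
    """Confidence that p > required_level given observed pass/fail evidence,
    starting from a uniform Beta(1, 1) prior over p."""
    a, b = 1 + successes, 1 + failures  # Beta posterior parameters
    # P(p > required_level | evidence) = 1 - CDF of Beta(a, b) at the level
    return 1.0 - beta.cdf(required_level, a, b)

# Example: 4999 successful trials and 1 observed failure.
print(f"Assurance: {assurance_measure(4999, 1):.4f}")
```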
Asaadi, E., Denney, E., & Pai, G. (2020). Quantifying Assurance in Learning-Enabled Systems. In Lecture Notes in Computer Science (Vol. 12234, pp. 270–286). Springer. https://doi.org/10.1007/978-3-030-54549-9_18