Risks of ignoring uncertainty propagation in AI-augmented security pipelines

Abstract

AI technologies are increasingly being integrated into the secure development of software-based systems, with a growing trend of composing AI-based subsystems (with uncertain levels of performance) into automated pipelines. This poses a fundamental research challenge and a serious threat to safety-critical domains. Despite the existing knowledge about uncertainty in risk analysis, no previous work has estimated the uncertainty of AI-augmented systems given the propagation of errors in the pipeline. We provide the formal underpinnings for capturing uncertainty propagation, develop a simulator to quantify uncertainty, and evaluate the simulation of propagating errors with one case study. We discuss the generalizability of our approach and its limitations, and present recommendations for evaluation policies concerning AI systems. Future work includes extending the approach by relaxing the remaining assumptions and by experimenting with a real system.
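To make the core idea concrete: when two AI components with uncertain performance are chained, the uncertainty about the end-to-end performance is wider than that of either stage alone. The following minimal Monte Carlo sketch illustrates this with hypothetical evaluation counts (90/100 and 80/100 true positives for two independent stages); it is an illustration of the general principle, not the simulator described in the article.

```python
import random

random.seed(0)
N = 50_000

# Hypothetical, assumed evaluation results (not from the article):
# stage 1 caught 90 of 100 true positives; stage 2 caught 80 of 100.
# Model each stage's unknown recall with a Beta posterior
# (uniform prior: Beta(successes + 1, failures + 1)).
stage1 = [random.betavariate(90 + 1, 10 + 1) for _ in range(N)]
stage2 = [random.betavariate(80 + 1, 20 + 1) for _ in range(N)]

# If the stages err independently, the pipeline's end-to-end recall
# is the product of the per-stage recalls.
pipeline = [a * b for a, b in zip(stage1, stage2)]

def mean(xs):
    return sum(xs) / len(xs)

def ci_width(xs):
    """Width of the central 95% interval of the samples."""
    s = sorted(xs)
    return s[int(0.975 * len(s))] - s[int(0.025 * len(s))]

print(f"stage 1 recall:  mean={mean(stage1):.3f}  95% width={ci_width(stage1):.3f}")
print(f"stage 2 recall:  mean={mean(stage2):.3f}  95% width={ci_width(stage2):.3f}")
print(f"pipeline recall: mean={mean(pipeline):.3f}  95% width={ci_width(pipeline):.3f}")
```

Running this shows the pipeline's expected recall is lower than either stage's and its 95% interval is wider, i.e., evaluating each component in isolation understates the end-to-end uncertainty.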

Citation (APA)

Mezzi, E., Papotti, A., Massacci, F., & Tuma, K. (2025). Risks of ignoring uncertainty propagation in AI-augmented security pipelines. Risk Analysis, 45(12), 4469–4489. https://doi.org/10.1111/risa.70059
