Neural networks have many applications in safety- and mission-critical systems. Because industrial standards in safety-critical domains require developers to provide safety assurance, tools and techniques are needed that enable the effective creation of safety evidence for AI systems. In this position paper, we propose using rules extracted from neural networks as artefacts of safety evidence. We discuss the rationale behind the use of rules and illustrate it using the MNIST dataset.
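To make the idea concrete, here is a minimal, hypothetical sketch of one common rule-extraction approach: fitting a shallow decision-tree surrogate to a trained network's predictions, so the tree's branches act as human-readable if-then rules. This is an illustration only, not the paper's method; it uses scikit-learn's small `digits` dataset as a stand-in for MNIST, and all model choices (layer sizes, tree depth) are arbitrary assumptions.

```python
# Hypothetical sketch: decision-tree surrogate as a rule-extraction method.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_digits(return_X_y=True)  # 8x8 digit images, a small MNIST stand-in
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train the "black-box" network whose behaviour we want to explain.
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
net.fit(X_train, y_train)

# Fit a shallow tree to mimic the network's *predictions* (not the labels);
# each root-to-leaf path is a candidate if-then rule over pixel features.
surrogate = DecisionTreeClassifier(max_depth=4, random_state=0)
surrogate.fit(X_train, net.predict(X_train))

rules = export_text(surrogate)  # textual if-then rules
# Fidelity: how often the rules agree with the network on unseen data.
fidelity = (surrogate.predict(X_test) == net.predict(X_test)).mean()
print(f"surrogate fidelity to the network: {fidelity:.2f}")
```

The fidelity score indicates how faithfully the extracted rules reproduce the network's behaviour, which is one property such rules would need before serving as safety evidence.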
Beyene, T. A., & Sahu, A. (2020). Rule-Based Safety Evidence for Neural Networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12235 LNCS, pp. 328–335). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-55583-2_24