Rule-Based Safety Evidence for Neural Networks


Abstract

Neural networks have many applications in safety- and mission-critical systems. Since industrial standards in various safety-critical domains require developers of critical systems to provide safety assurance, tools and techniques must be developed that enable the effective creation of safety evidence for AI systems. In this position paper, we propose the use of rules extracted from neural networks as artefacts for safety evidence. We discuss the rationale behind using rules and illustrate the approach using the MNIST dataset.
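The abstract does not specify the paper's extraction method, but one common way to obtain rules from a trained network is to fit an interpretable surrogate model (e.g. a shallow decision tree) on the network's own predictions, then read the tree's decision paths as rules. The sketch below illustrates that idea only; the dataset (scikit-learn's 8x8 digits, standing in for MNIST), the model sizes, and the surrogate approach are all illustrative assumptions, not the authors' method.

```python
# Hedged sketch: rule extraction via a surrogate decision tree.
# Assumptions (not from the paper): scikit-learn's digits set stands in
# for MNIST, and a shallow tree serves as the rule extractor.
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_digits(return_X_y=True)

# 1. Train the neural network whose behaviour we want evidence about.
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
net.fit(X, y)

# 2. Fit an interpretable surrogate on the network's *predictions*, so the
#    extracted rules describe the network itself, not the ground-truth labels.
surrogate = DecisionTreeClassifier(max_depth=4, random_state=0)
surrogate.fit(X, net.predict(X))

# 3. Each root-to-leaf path in the tree is a human-readable candidate rule;
#    fidelity measures how faithfully the rules mimic the network.
rules = export_text(surrogate)
fidelity = (surrogate.predict(X) == net.predict(X)).mean()
print(f"surrogate fidelity to network: {fidelity:.2f}")
```

The fidelity score is one way such rules could serve as auditable evidence: a reviewer can inspect the finite rule set directly, and the score bounds how much of the network's behaviour the rules actually cover.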

Citation (APA)

Beyene, T. A., & Sahu, A. (2020). Rule-Based Safety Evidence for Neural Networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12235 LNCS, pp. 328–335). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-55583-2_24
