Three Reasons Why: Framing the Challenges of Assuring AI

Abstract

Assuring the safety of systems that use Artificial Intelligence (AI), and specifically Machine Learning (ML) components, is difficult because of the unique challenges AI presents for current assurance practice. What is missing, however, is an overall understanding of this multi-disciplinary problem space. This paper presents a model that frames the challenges into three categories, aligned with the reasons why they occur. By providing a common picture of where existing issues and solutions fit in, the model aims to help bridge cross-domain conceptual gaps and give a clearer understanding to safety practitioners, ML experts, regulators and anyone else involved in the assurance of a system with AI.

Citation (APA)

Fang, X., & Johnson, N. (2019). Three Reasons Why: Framing the Challenges of Assuring AI. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11699 LNCS, pp. 281–287). Springer Verlag. https://doi.org/10.1007/978-3-030-26250-1_22
