The why and how of trustworthy AI

  • Schmitz A
  • Akila M
  • Hecker D
  • Poretschkin M
  • Wrobel S

Abstract

Artificial intelligence is increasingly penetrating industrial applications as well as areas that affect our daily lives. As a consequence, criteria are needed to validate whether the quality of an AI application is sufficient for its intended use. In both the academic community and the societal debate, a consensus has emerged under the term “trustworthiness” on the set of essential quality requirements that should be placed on an AI application. At the same time, the question of how these quality requirements can be operationalized remains largely open.

In this paper, we consider trustworthy AI from two perspectives: the product perspective and the organizational perspective. For the former, we present an AI-specific risk analysis and outline how verifiable arguments for the trustworthiness of an AI application can be developed. For the latter, we explore how an AI management system can be employed to assure the trustworthiness of an organization with respect to its handling of AI. Finally, we argue that coordinated measures from both perspectives are required to achieve AI trustworthiness.

Citation (APA)

Schmitz, A., Akila, M., Hecker, D., Poretschkin, M., & Wrobel, S. (2022). The why and how of trustworthy AI. At - Automatisierungstechnik, 70(9), 793–804. https://doi.org/10.1515/auto-2022-0012
